
IEDO-net: Optimized Resnet50 for the classification of COVID-19

  • The emergence of COVID-19 has broken the silence of humanity, and people are gradually becoming more concerned about pneumonia-related diseases; thus, improving the recognition rate of pneumonia-related diseases is an important task. Neural networks have remarkable effectiveness in medical diagnosis, though their internal parameters need to be set in accordance with different data sets; therefore, an important challenge is how to further improve the efficiency of neural network models. In this paper, we propose a learning exponential distribution optimizer based on chaotic evolution, and we optimize Resnet50 for COVID-19 classification; the resulting model is abbreviated as IEDO-net. The algorithm introduces a criterion based on the signal-to-noise-ratio distance, a chaotic evolution mechanism is designed according to this criterion to effectively improve the search efficiency of the algorithm, and a rotating flight mechanism is introduced to improve the search capability of the algorithm. On computed tomography (CT) image data of COVID-19, the accuracy, sensitivity, specificity, precision, and F1 score of the optimized Resnet50 were 94.42%, 93.40%, 94.92%, 94.29% and 93.84%, respectively. The proposed network model is compared with other algorithms and models, and ablation experiments and convergence and statistical analyses are performed. The results show that the diagnostic performance of IEDO-net is competitive, which validates the feasibility and effectiveness of the proposed network.

    Citation: Chengtian Ouyang, Huichuang Wu, Jiaying Shen, Yangyang Zheng, Rui Li, Yilin Yao, Lin Zhang. IEDO-net: Optimized Resnet50 for the classification of COVID-19[J]. Electronic Research Archive, 2023, 31(12): 7578-7601. doi: 10.3934/era.2023383




    Let $(M,\omega)$ be a complex $n$-dimensional compact Hermitian manifold and $\chi$ be a smooth real $(1,1)$-form on $(M,\omega)$. $\Gamma_\omega^k$ is the set of all real $(1,1)$-forms whose eigenvalues with respect to $\omega$ belong to the $k$-positive cone $\Gamma_k$. For any $u\in C^2(M)$, we obtain a new $(1,1)$-form

    $$\chi_u:=\chi+\sqrt{-1}\,\partial\bar\partial u.$$

    In any local coordinate chart, $\chi_u$ can be expressed as

    $$\chi_u=\sqrt{-1}\,(\chi_{i\bar j}+u_{i\bar j})\,dz^i\wedge d\bar z^j.$$
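    For the reader's convenience, we recall the standard notation used throughout (this normalization is the usual one in the $k$-Hessian literature and is assumed here, since it is not restated in the text):

    $$\sigma_k(\lambda)=\sum_{1\le i_1<\cdots<i_k\le n}\lambda_{i_1}\cdots\lambda_{i_k},\qquad \Gamma_k=\{\lambda\in\mathbb R^n:\sigma_1(\lambda)>0,\dots,\sigma_k(\lambda)>0\},$$

    and $\chi_u\in\Gamma_\omega^k$ means that the eigenvalues $\lambda(u)$ of $\chi_u$ with respect to $\omega$ lie in $\Gamma_k$.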

    In this article, we study the following parabolic Hessian quotient equation:

    $$\begin{cases}\dfrac{\partial u(x,t)}{\partial t}=\log\dfrac{C_n^k\,\chi_u^k\wedge\omega^{n-k}}{C_n^l\,\chi_u^l\wedge\omega^{n-l}}-\log\phi(x,u), & (x,t)\in M\times[0,T),\\[1ex] u(x,0)=u_0(x), & x\in M,\end{cases} \tag{1.1}$$

    where $0\le l<k\le n$, $[0,T)$ is the maximal time interval on which the solution exists, and $\phi(x,z)\in C^\infty(M\times\mathbb R)$ is a given strictly positive function.

    The study of this parabolic flow is motivated by the complex equations

    $$\chi_u^k\wedge\omega^{n-k}=\frac{C_n^l}{C_n^k}\,\phi(x,u)\,\chi_u^l\wedge\omega^{n-l},\qquad \chi_u\in\Gamma_\omega^k. \tag{1.2}$$

    Equation (1.2) includes some important equations in geometry, for example the complex Monge-Ampère equation and the Donaldson equation [6], which have attracted extensive attention in mathematics and physics since Yau's breakthrough on the Calabi conjecture [28]. Since Eq (1.2) is fully nonlinear elliptic, a classical way to solve it is the continuity method. Using this method, the complex Monge-Ampère equation

    $$\chi_u^n=\phi(x)\,\omega^n,\qquad \chi_u\in\Gamma_\omega^n$$

    was solved by Yau [28]. The Donaldson equation

    $$\chi_u^n=\frac{\int_M\chi^n}{\int_M\chi\wedge\omega^{n-1}}\,\chi_u\wedge\omega^{n-1},\qquad \chi_u\in\Gamma_\omega^n$$

    was independently solved by Li-Shi-Yao [11], Collins-Székelyhidi [3] and Sun [17]. Equation (1.2) also includes the complex $k$-Hessian equation and the complex Hessian quotient equation, which correspond, respectively, to

    $$C_n^k\,\chi_u^k\wedge\omega^{n-k}=\phi(x)\,\omega^n,\qquad \chi_u\in\Gamma_\omega^k,$$
    $$\chi_u^k\wedge\omega^{n-k}=\frac{C_n^l}{C_n^k}\,\phi(x)\,\chi_u^l\wedge\omega^{n-l},\qquad \chi_u\in\Gamma_\omega^k.$$

    Dinew and Kolodziej [7] proved a Liouville type theorem for m-subharmonic functions in $\mathbb C^n$ and, combining it with the estimate of Hou-Ma-Wu [10], solved the complex $k$-Hessian equation by the continuity method. Under the cone condition, Sun [16] solved the complex Hessian quotient equation by the continuity method. There have been many extensive studies of the complex Monge-Ampère equation, the Donaldson equation, the complex $k$-Hessian equation and the complex Hessian quotient equation on closed complex manifolds, see, e.g., [4,12,20,22,23,29,30]. When the right-hand side function $\phi$ in Eq (1.2) depends on $u$, that is $\phi=\phi(x,u)$, it is interesting to ask whether the equation can still be solved. We intend to solve (1.2) by the parabolic flow method.
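    To make the connection between the form equation (1.2) and the eigenvalue equation used later explicit, note the standard pointwise identity (a short check; the normalization matches (2.1) below):

    $$\chi_u^k\wedge\omega^{n-k}=\frac{\sigma_k(\lambda(u))}{C_n^k}\,\omega^n,\qquad\text{so (1.2) is equivalent to}\qquad \frac{\sigma_k(\lambda(u))}{\sigma_l(\lambda(u))}=\phi(x,u),\quad \lambda(u)\in\Gamma_k.$$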

    Equation (1.1) covers some of the important geometric flows in complex geometry. If $k=n$ and $l=0$, (1.1) is known as the complex Monge-Ampère flow

    $$\frac{\partial u(x,t)}{\partial t}=\log\frac{\chi_u^n}{\omega^n}-\log\phi(x),\qquad (x,t)\in M\times[0,T),$$

    which is equivalent to the Kähler-Ricci flow. The result of Yau [28] was reproduced by Cao [2] through the Kähler-Ricci flow. Using the complex Monge-Ampère flow, similar results on a compact Hermitian manifold and on a compact almost Hermitian manifold were proved by Gill [9] and Chu [5], respectively. To study the normalized twisted Chern-Ricci flow

    $$\frac{\partial\omega_t}{\partial t}=-\mathrm{Ric}(\omega_t)-\omega_t+\eta,$$

    which is equivalent to the following Monge-Ampère flow

    $$\frac{\partial\varphi}{\partial t}=\log\frac{(\theta_t+dd^c\varphi)^n}{\Omega}-\varphi,$$

    the authors of [25,26] considered the following complex Monge-Ampère flow

    $$\frac{\partial\varphi}{\partial t}=\log\frac{(\theta_t+dd^c\varphi)^n}{\Omega}-F(t,x,\varphi),$$

    where $\Omega$ is a smooth volume form on $M$. From this we can see that in some geometric flows the given function $\phi$ depends on $u$. If $l=0$, (1.1) is called the complex $k$-Hessian flow

    $$\frac{\partial u(x,t)}{\partial t}=\log\frac{C_n^k\,\chi_u^k\wedge\omega^{n-k}}{\omega^n}-\log\phi(x,u),\qquad (x,t)\in M\times[0,T).$$

    The solvability of the complex $k$-Hessian flow was shown by Sheng-Wang [21].

    In this paper, our research can be viewed as a generalization of Tô's work in [26] and of Sheng-Wang's work in [21]. To solve the complex Hessian quotient flow, the parabolic C-subsolution condition is needed. Following Phong and Tô [14], we give the definition of a parabolic C-subsolution to Eq (1.1).

    Definition 1.1. Let $\underline u(x,t)\in C^{2,1}(M\times[0,T))$ with $\chi_{\underline u}\in\Gamma_\omega^k$. If there exist constants $\delta,R>0$ such that, for any $(x,t)\in M\times[0,T)$,

    $$\log\frac{\sigma_k(\lambda)}{\sigma_l(\lambda)}-\partial_t\underline u\le\log\phi(x,\underline u),\qquad \lambda-\lambda(\underline u)+\delta I\in\Gamma_n,$$

    implies that

    $$|\lambda|<R,$$

    then $\underline u$ is said to be a parabolic C-subsolution of (1.1), where $\lambda(\underline u)$ denotes the eigenvalue set of $\chi_{\underline u}$ with respect to $\omega$.

    Equivalently, we can give the following definition of a parabolic C-subsolution of (1.1).

    Definition 1.2. Let $\underline u(x,t)\in C^{2,1}(M\times[0,T))$ with $\chi_{\underline u}\in\Gamma_\omega^k$. If there exists a constant $\tilde\delta>0$ such that, for any $(x,t)\in M\times[0,T)$,

    $$\lim_{\mu\to+\infty}\log\frac{\sigma_k(\lambda(\underline u)+\mu e_i)}{\sigma_l(\lambda(\underline u)+\mu e_i)}>\partial_t\underline u+\tilde\delta+\log\phi(x,\underline u),\qquad 1\le i\le n, \tag{1.3}$$

    then $\underline u$ is said to be a parabolic C-subsolution of (1.1).
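    For orientation, condition (1.3) can be made explicit using the standard expansion $\sigma_k(\lambda+\mu e_i)=\sigma_k(\lambda)+\mu\,\sigma_{k-1}(\lambda|i)$ (a short computation, not spelled out in the original text):

    $$\lim_{\mu\to+\infty}\frac{\sigma_k(\lambda(\underline u)+\mu e_i)}{\sigma_l(\lambda(\underline u)+\mu e_i)}=\begin{cases}+\infty, & l=0,\\[1ex] \dfrac{\sigma_{k-1}(\lambda(\underline u)|i)}{\sigma_{l-1}(\lambda(\underline u)|i)}, & 1\le l<k,\end{cases}$$

    so for $l=0$ (the complex $k$-Hessian flow) condition (1.3) holds for any admissible $\underline u$, while for $l\ge1$ it is a genuine restriction on $\lambda(\underline u)$.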

    Our main result is

    Theorem 1.3. Let $(M,g)$ be a compact Hermitian manifold and $\chi$ be a smooth real $(1,1)$-form on $M$. Assume that there exists a parabolic C-subsolution $\underline u$ for Eq (1.1) and that

    $$\partial_t\underline u\ge\max\Big\{\sup_M\Big(\log\frac{\sigma_k(\lambda(u_0))}{\sigma_l(\lambda(u_0))}-\log\phi(x,u_0)\Big),\,0\Big\}, \tag{1.4}$$
    $$\frac{\phi_z(x,z)}{\phi}>c_\phi>0, \tag{1.5}$$

    where $c_\phi$ is a constant. Then there exists a unique smooth solution $u(x,t)$ to (1.1) for all time, with

    $$\sup_{x\in M}\big(u_0(x)-\underline u(x,0)\big)=0. \tag{1.6}$$

    Moreover, $u(x,t)$ is $C^\infty$-convergent to a smooth function $u_\infty$, which solves Eq (1.2).

    The rest of this paper is organized as follows. In Section 2, we give some important lemmas and an estimate on $|u_t(x,t)|$. In Section 3, we prove the $C^0$ estimate for Eq (1.1) by the parabolic C-subsolution condition and the Alexandroff-Bakelman-Pucci maximum principle. In Section 4, using the parabolic C-subsolution condition, we establish the $C^2$ estimate for Eq (1.1) by the method of Hou-Ma-Wu [10]. In Section 5, we adapt the blowup method of Dinew and Kolodziej [7] to obtain the gradient estimate. In Section 6, we give the proof of the long-time existence of the solution to the parabolic equation and of its convergence, that is, Theorem 1.3.

    In this section, we give some notation and lemmas. In holomorphic coordinates, we can write

    $$\omega=\sqrt{-1}\,g_{i\bar j}\,dz^i\wedge d\bar z^j=\sqrt{-1}\,\delta_{ij}\,dz^i\wedge d\bar z^j,\qquad \chi=\sqrt{-1}\,\chi_{i\bar j}\,dz^i\wedge d\bar z^j,$$
    $$\chi_u=\sqrt{-1}\,(\chi_{i\bar j}+u_{i\bar j})\,dz^i\wedge d\bar z^j=\sqrt{-1}\,X_{i\bar j}\,dz^i\wedge d\bar z^j,$$
    $$\chi_{\underline u}=\sqrt{-1}\,(\chi_{i\bar j}+\underline u_{i\bar j})\,dz^i\wedge d\bar z^j=\sqrt{-1}\,\underline X_{i\bar j}\,dz^i\wedge d\bar z^j.$$

    Here $\lambda(u)$ and $\lambda(\underline u)$ denote the eigenvalue sets of $\{X_{i\bar j}\}$ and $\{\underline X_{i\bar j}\}$ with respect to $\{g_{i\bar j}\}$, respectively. In local coordinates, (1.1) can be written as

    $$\partial_tu=\log\frac{\sigma_k(\lambda(u))}{\sigma_l(\lambda(u))}-\log\phi(x,u). \tag{2.1}$$

    For simplicity, set

    $$F(\lambda(u))=\log\frac{\sigma_k(\lambda(u))}{\sigma_l(\lambda(u))};$$

    then (2.1) is abbreviated as

    $$\partial_tu=F(\lambda(u))-\log\phi(x,u). \tag{2.2}$$

    We use the following notation:

    $$F^{i\bar j}=\frac{\partial F}{\partial X_{i\bar j}},\qquad \mathcal F=\sum_iF^{i\bar i},\qquad F^{i\bar j,p\bar q}=\frac{\partial^2F}{\partial X_{i\bar j}\partial X_{p\bar q}}.$$

    For any $x_0\in M$, we can choose local holomorphic coordinates such that the matrix $\{X_{i\bar j}\}$ is diagonal with $X_{1\bar1}\ge\cdots\ge X_{n\bar n}$; then, at $x_0$,

    $$\lambda(u)=(\lambda_1,\dots,\lambda_n)=(X_{1\bar1},\dots,X_{n\bar n}),$$
    $$F^{i\bar j}=F^{i\bar i}\delta_{ij}=\Big(\frac{\sigma_{k-1}(\lambda|i)}{\sigma_k}-\frac{\sigma_{l-1}(\lambda|i)}{\sigma_l}\Big)\delta_{ij},\qquad F^{1\bar1}\le\cdots\le F^{n\bar n}.$$

    To prove the a priori $C^0$ estimate for solutions to Eq (1.1), we need the following variant of the Alexandroff-Bakelman-Pucci maximum principle, which is Proposition 10 in [20].

    Lemma 2.1. [20] Let $v:B(1)\to\mathbb R$ be a smooth function satisfying $v(0)+\epsilon\le\inf_{\partial B(1)}v$, where $B(1)$ denotes the unit ball in $\mathbb R^n$. Define the set

    $$\Omega=\Big\{x\in B(1):\ |Dv(x)|<\frac{\epsilon}{2},\ \text{and}\ v(y)\ge v(x)+Dv(x)\cdot(y-x)\ \ \forall\,y\in B(1)\Big\}.$$

    Then there exists a constant $c_0>0$ such that

    $$c_0\epsilon^n\le\int_\Omega\det(D^2v).$$

    Next, we give an estimate on $|u_t(x,t)|$.

    Lemma 2.2. Under the assumptions of Theorem 1.3, let $u(x,t)$ be a solution to (1.1). Then for any $(x,t)\in M\times[0,T)$, we have

    $$\min\big\{\inf_Mu_t(\cdot,0),\,0\big\}\le u_t(x,t)\le\max\big\{\sup_Mu_t(\cdot,0),\,0\big\}. \tag{2.3}$$

    Furthermore, there is a constant $C>0$ such that

    $$\sup_{M\times[0,T)}|\partial_tu(x,t)|\le\sup_M|\partial_tu(x,0)|\le C,$$

    where $C$ depends on $H=|u_0|_{C^2(M)}$ and $|\phi|_{C^0(M\times[-H,H])}$.

    Proof. Differentiating both sides of (2.2) with respect to $t$, we obtain

    $$(u_t)_t=F^{i\bar j}\frac{\partial X_{i\bar j}}{\partial t}-\frac{\phi_z}{\phi}u_t=F^{i\bar j}(u_t)_{i\bar j}-\frac{\phi_z}{\phi}u_t. \tag{2.4}$$

    Set $u^\epsilon_t=u_t-\epsilon t$, $\epsilon>0$. For any $T'\in(0,T)$, suppose $u^\epsilon_t$ achieves its maximum $M_t$ at $(x_0,t_0)\in M\times[0,T']$. Without loss of generality, we may suppose $M_t\ge0$. If $t_0>0$, from the parabolic maximum principle and (2.4), we get

    $$0\le(u^\epsilon_t)_t-F^{i\bar j}(u^\epsilon_t)_{i\bar j}+\frac{\phi_z}{\phi}u^\epsilon_t=(u_t)_t-\epsilon-F^{i\bar j}(u_t)_{i\bar j}+\frac{\phi_z}{\phi}u_t-\epsilon\frac{\phi_z}{\phi}t_0=-\epsilon-\epsilon\frac{\phi_z}{\phi}t_0<0.$$

    This is obviously a contradiction, so $t_0=0$ and

    $$\sup_{M\times[0,T']}u^\epsilon_t(x,t)=\sup_Mu_t(x,0),$$

    that is,

    $$\sup_{M\times[0,T']}u_t(x,t)=\sup_{M\times[0,T']}\big(u^\epsilon_t(x,t)+\epsilon t\big)\le\sup_Mu_t(x,0)+\epsilon T'.$$

    Letting $\epsilon\to0$, we obtain

    $$\sup_{M\times[0,T']}u_t(x,t)\le\sup_Mu_t(x,0).$$

    Since $T'\in(0,T)$ is arbitrary, we have

    $$\sup_{M\times[0,T)}u_t(x,t)\le\sup_Mu_t(x,0). \tag{2.5}$$

    Similarly, setting $u^\epsilon_t=u_t+\epsilon t$, $\epsilon>0$, we obtain

    $$\inf_{M\times[0,T)}u_t(x,t)\ge\inf_Mu_t(x,0). \tag{2.6}$$

    Equation (2.1) yields

    $$|u_t(x,0)|=\Big|\log\frac{\sigma_k(\lambda(u_0))}{\sigma_l(\lambda(u_0))}-\log\phi(x,u_0)\Big|\le C. \tag{2.7}$$

    Combining (2.5)–(2.7), we complete the proof of Lemma 2.2.

    From the concavity of $F(\lambda(u))$ and the parabolic C-subsolution condition, we obtain the following lemma, which plays an important role in the $C^2$ estimate.

    Lemma 2.3. Under the assumptions of Theorem 1.3 and assuming that $X_{1\bar1}\ge\cdots\ge X_{n\bar n}$, there exist two positive constants $N$ and $\theta$ such that either

    $$F^{i\bar i}(\underline u_{i\bar i}-u_{i\bar i})-\partial_t(\underline u-u)\ge\theta(1+\mathcal F) \tag{2.8}$$

    or

    $$F^{1\bar1}\ge\frac{\theta}{N}(1+\mathcal F). \tag{2.9}$$

    Proof. Since $\underline u$ is a parabolic C-subsolution to Eq (1.1), by Definition 1.2 there are uniform constants $\tilde\delta>0$ and $N>0$ such that

    $$\log\frac{\sigma_k(\lambda(\underline u)+Ne_1)}{\sigma_l(\lambda(\underline u)+Ne_1)}>\partial_t\underline u+\tilde\delta+\log\phi(x,\underline u). \tag{2.10}$$

    If $\epsilon>0$ is sufficiently small, it can be obtained from (2.10) that

    $$\log\frac{\sigma_k(\lambda(\underline u)-\epsilon I+Ne_1)}{\sigma_l(\lambda(\underline u)-\epsilon I+Ne_1)}\ge\partial_t\underline u+\tilde\delta+\log\phi(x,\underline u).$$

    Set $\lambda'=\lambda(\underline u)-\epsilon I+Ne_1$; then

    $$F(\lambda')\ge\partial_t\underline u+\tilde\delta+\log\phi(x,\underline u). \tag{2.11}$$

    Using the concavity of $F(\lambda(u))$ gives

    $$\begin{aligned}F^{i\bar i}(\underline u_{i\bar i}-u_{i\bar i})&=F^{i\bar i}(\underline X_{i\bar i}-X_{i\bar i})\\&=F^{i\bar i}(\underline X_{i\bar i}-\epsilon\delta_{ii}+N\delta_{i1}-X_{i\bar i})+\epsilon\mathcal F-NF^{1\bar1}\\&\ge F(\lambda')-F(\lambda(u))+\epsilon\mathcal F-NF^{1\bar1}.\end{aligned} \tag{2.12}$$

    From Lemma 2.2 and (1.4), we obtain

    $$\partial_t\underline u(x,t)\ge\partial_tu(x,t),\qquad (x,t)\in M\times[0,T). \tag{2.13}$$

    In addition, it can be obtained from condition (1.6) that

    $$\underline u(x,0)\ge u(x,0),\qquad x\in M. \tag{2.14}$$

    From (2.13) and (2.14) we deduce that

    $$\underline u(x,t)\ge u(x,t),\qquad (x,t)\in M\times[0,T). \tag{2.15}$$

    Since $\phi_z>0$, it follows from this that

    $$\phi(x,\underline u)\ge\phi(x,u). \tag{2.16}$$

    Combining (2.2), (2.11) and (2.16) gives

    $$F(\lambda')-F(\lambda(u))\ge\partial_t\underline u(x,t)-\partial_tu(x,t)+\tilde\delta+\log\phi(x,\underline u)-\log\phi(x,u)\ge\partial_t\underline u(x,t)-\partial_tu(x,t)+\tilde\delta. \tag{2.17}$$

    Putting (2.17) into (2.12) yields

    $$F^{i\bar i}(\underline u_{i\bar i}-u_{i\bar i})\ge\partial_t\underline u(x,t)-\partial_tu(x,t)+\tilde\delta+\epsilon\mathcal F-NF^{1\bar1}\ge\tilde\delta+\epsilon\mathcal F-NF^{1\bar1}.$$

    Let

    $$\theta=\min\Big\{\frac{\tilde\delta}{2},\ \frac{\epsilon}{2}\Big\}.$$

    If $NF^{1\bar1}\le\theta(1+\mathcal F)$, Inequality (2.8) is obtained; otherwise Inequality (2.9) must hold.

    In this section, we prove the $C^0$ estimate using the existence of a parabolic C-subsolution and the Alexandroff-Bakelman-Pucci maximum principle.

    Proposition 3.1. Under the assumptions of Theorem 1.3, let $u(x,t)$ be a solution to (1.1). Then there exists a constant $C>0$ such that

    $$|u(x,t)|_{C^0(M\times[0,T))}\le C,$$

    where $C$ depends on $|u_0|_{C^2(M)}$ and $|\underline u|_{C^2(M\times[0,T))}$.

    Proof. Combining (2.13), (2.14) and $\phi_z(x,z)>0$ yields

    $$\partial_t\underline u(x,t)+\log\phi(x,\underline u)\ge\partial_tu(x,t)+\log\phi(x,u). \tag{3.1}$$

    Let us rewrite Eq (2.2) as

    $$F(\lambda(u))=\partial_tu+\log\phi(x,u). \tag{3.2}$$

    For fixed $t\in[0,T)$, Eq (3.2) is elliptic. From (3.1), we see that the parabolic C-subsolution $\underline u(x,t)$ is a C-subsolution to Eq (3.2) in the elliptic sense. From (2.15) and (1.6), we have

    $$\sup_{M\times[0,T)}(u-\underline u)=0.$$

    Our goal is to obtain a lower bound for $L=\inf_{M\times\{t\}}(u-\underline u)$. Note that $\lambda(u)\in\Gamma_k$, which implies $\lambda(u)\in\Gamma_1$, and hence $\Delta(u-\underline u)\ge-\tilde C$, where $\Delta$ is the complex Laplacian with respect to $\omega$. Following Tosatti-Weinkove's method [22], we can prove that $\|u-\underline u\|_{L^1(M)}$ is uniformly bounded. Let $G:M\times M\to\mathbb R$ be the associated Green's function; then, by Yau [28], there is a uniform constant $K$ such that

    $$G(x,y)+K\ge0,\quad (x,y)\in M\times M,\qquad\text{and}\qquad \int_{y\in M}G(x,y)\,\omega^n(y)=0.$$

    Since

    $$\sup_{M\times[0,T)}(u-\underline u)=0,$$

    for fixed $t\in[0,T)$ there exists a point $x_0\in M$ such that $(u-\underline u)(x_0,t)=0$. Thus

    $$\begin{aligned}(u-\underline u)(x_0,t)&=\int_M(u-\underline u)\,d\mu-\int_{y\in M}G(x_0,y)\,\Delta(u-\underline u)(y)\,\omega^n(y)\\&=\int_M(u-\underline u)\,d\mu-\int_{y\in M}\big(G(x_0,y)+K\big)\,\Delta(u-\underline u)(y)\,\omega^n(y)\\&\le\int_M(u-\underline u)\,d\mu+\tilde CK\int_M\omega^n,\end{aligned}$$

    that is,

    $$\int_M(\underline u-u)\,d\mu=\int_M|u-\underline u|\,d\mu\le\tilde CK\int_M\omega^n.$$

    Let us work in local coordinates for which the infimum $L$ is achieved at the origin, that is, $L=u(0,t)-\underline u(0,t)$. We write $B(1)=\{z:|z|<1\}$. Let $v=u-\underline u+\epsilon|z|^2$ for a small $\epsilon>0$. We have $\inf v=L=v(0)$, and $v(z)\ge L+\epsilon$ for $z\in\partial B(1)$. From Lemma 2.1, we get

    $$c_0\epsilon^{2n}\le\int_\Omega\det(D^2v). \tag{3.3}$$

    At the same time, if $x\in\Omega$, then $D^2v(x)\ge0$ implies that

    $$u_{i\bar j}(x)-\underline u_{i\bar j}(x)+\epsilon\delta_{ij}\ge0.$$

    If $\epsilon$ is sufficiently small, then

    $$\lambda(u)-\lambda(\underline u)\in-\delta I+\Gamma_n.$$

    Set $\mu=\lambda(u)-\lambda(\underline u)$. Since $\lambda(u)$ satisfies Eq (3.2), then

    $$F(\lambda(\underline u)+\mu)=\partial_tu+\log\phi(x,u),\qquad \mu+\delta I\in\Gamma_n. \tag{3.4}$$

    $\underline u$ is a C-subsolution to Eq (3.2) in the elliptic sense, so there is a uniform constant $R>0$ such that

    $$|\mu|\le R,$$

    which means $|v_{i\bar j}|\le C$ for any $x\in\Omega$. As in Blocki [1], for $x\in\Omega$ we have $D^2v(x)\ge0$, and so

    $$\det\big(D^2v(x)\big)\le2^{2n}\big(\det(v_{i\bar j})\big)^2\le C.$$

    From this and (3.3), we obtain

    $$c_0\epsilon^{2n}\le\int_\Omega\det(D^2v)\le C\,\mathrm{vol}(\Omega). \tag{3.5}$$

    On the other hand, by the definition of $\Omega$ in Lemma 2.1, for $x\in\Omega$ we get

    $$v(0)\ge v(x)-Dv(x)\cdot x>v(x)-\frac{\epsilon}{2},$$

    and so

    $$|v(x)|>\Big|L+\frac{\epsilon}{2}\Big|.$$

    It follows that

    $$\int_M|v(x)|\ge\int_\Omega|v(x)|\ge\Big|L+\frac{\epsilon}{2}\Big|\,\mathrm{vol}(\Omega). \tag{3.6}$$

    Since $\|u-\underline u\|_{L^1(M)}$ is uniformly bounded, $\int_M|v(x)|$ is also uniformly bounded. If $|L|$ is very large, Inequality (3.6) contradicts (3.5), which means that $L$ has a uniform lower bound. This bound holds for any $t\in[0,T)$; thus

    $$|u(x,t)|_{C^0(M\times[0,T))}\le|L|+\sup_{M\times[0,T)}|\underline u|\le C.$$

    In this section, we prove that the second-order estimate is controlled linearly by the square of the gradient estimate. Our calculation is a parabolic version of that in Hou-Ma-Wu [10].

    Proposition 4.1. Under the assumptions of Theorem 1.3, let $u(x,t)$ be a solution to (1.1). Then there exists a constant $\tilde C$ such that

    $$\sup_{M\times[0,T)}\big|\sqrt{-1}\,\partial\bar\partial u\big|\le\tilde C\Big(\sup_{M\times[0,T)}|\nabla u|^2+1\Big),$$

    where $\tilde C$ depends on $\chi$, $\omega$, $|\phi|_{C^2(M\times[-C,C])}$, $|\underline u|_{C^2(M\times[0,T))}$, $|\partial_t\underline u|_{C^0(M\times[0,T))}$ and $|u_0|_{C^2(M)}$.

    Proof. Let $\lambda(u)=(\lambda_1,\dots,\lambda_n)$, where $\lambda_1$ is the maximum eigenvalue. For any $T'<T$, we consider the following function

    $$W(x,t)=\log\lambda_1+\varphi\big(|\nabla u(x,t)|^2\big)+\psi\big(u(x,t)-\underline u(x,t)\big),\qquad (x,t)\in M\times[0,T'], \tag{4.1}$$

    where $\varphi$ and $\psi$ are determined later. We want to apply the maximum principle to the function $W$. Since the eigenvalues of the matrix $\{X_{i\bar j}\}$ with respect to $\omega$ need not be distinct at the point where $W$ achieves its maximum, we will perturb $\{X_{i\bar j}\}$ following the technique of [20]. Let $W$ achieve its maximum at $(x_0,t_0)\in M\times[0,T']$. Near $(x_0,t_0)$, we can choose local coordinates such that $\{X_{i\bar j}\}$ is diagonal with $X_{1\bar1}\ge\cdots\ge X_{n\bar n}$, and $\lambda(u)=(X_{1\bar1},\dots,X_{n\bar n})$. Let $D$ be a diagonal matrix such that $D_{11}=0$ and $0<D_{22}<\cdots<D_{nn}$ are small, satisfying $D_{nn}<2D_{22}$. Define the matrix $\tilde X=X-D$. At $(x_0,t_0)$, $\tilde X$ has eigenvalues

    $$\tilde\lambda_1=\lambda_1,\qquad \tilde\lambda_i=\lambda_i-D_{ii},\quad 2\le i\le n.$$

    Since all the eigenvalues of $\tilde X$ are distinct, we can define near $(x_0,t_0)$ the following smooth function

    $$\tilde W=\log\tilde\lambda_1+\varphi(|\nabla u|^2)+\psi(u-\underline u), \tag{4.2}$$

    where

    $$\varphi(s)=-\frac12\log\Big(1-\frac{s}{2K}\Big),\quad 0\le s\le K-1,$$
    $$\psi(s)=-E\log\Big(1+\frac{s}{2L}\Big),\quad -L+1\le s\le L-1,$$
    $$K=\sup_{M\times[0,T']}|\nabla u|^2+1,$$
    $$L=\sup_{M\times[0,T']}|u|+\sup_{M\times[0,T']}|\underline u|+1,$$
    $$E=2L(C_1+1),$$

    and $C_1>0$ is to be chosen later. Direct calculation yields

    $$0<\frac{1}{4K}\le\varphi'\le\frac{1}{2K},\qquad \varphi''=2(\varphi')^2>0, \tag{4.3}$$

    and

    $$C_1+1\le-\psi'\le2(C_1+1),\qquad \psi''\ge\frac{4\epsilon}{1-\epsilon}(\psi')^2\ \ \text{for all}\ \epsilon\le\frac{1}{4E+1}. \tag{4.4}$$
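    As a quick check of (4.3) and of the second inequality in (4.4) (a sketch; it follows directly from the explicit choices of $\varphi$ and $\psi$ above):

    $$\varphi'(s)=\frac{1}{2(2K-s)}\in\Big[\frac{1}{4K},\frac{1}{2K}\Big],\qquad \varphi''(s)=\frac{1}{2(2K-s)^2}=2\big(\varphi'(s)\big)^2\quad\text{for }0\le s\le K-1,$$
    $$\psi'(s)=-\frac{E}{2L+s},\qquad \psi''(s)=\frac{E}{(2L+s)^2}=\frac{1}{E}\big(\psi'(s)\big)^2\ge\frac{4\epsilon}{1-\epsilon}\big(\psi'(s)\big)^2\iff\epsilon\le\frac{1}{4E+1}.$$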

    Without loss of generality, we can assume that $\lambda_1>1$. From here on, all calculations are done at $(x_0,t_0)$. From the maximum principle, calculating the first and second derivatives of the function $\tilde W$ gives

    $$0=\tilde W_i=\frac{\tilde\lambda_{1,i}}{\lambda_1}+\varphi'(|\nabla u|^2)_i+\psi'(u-\underline u)_i,\qquad 1\le i\le n, \tag{4.5}$$
    $$0\ge\tilde W_{i\bar i}=\frac{\tilde\lambda_{1,i\bar i}}{\lambda_1}-\frac{\tilde\lambda_{1,i}\tilde\lambda_{1,\bar i}}{\lambda_1^2}+\varphi'(|\nabla u|^2)_{i\bar i}+\varphi''\big|(|\nabla u|^2)_i\big|^2+\psi'(u-\underline u)_{i\bar i}+\psi''\big|(u-\underline u)_i\big|^2, \tag{4.6}$$
    $$0\le\tilde W_t=\frac{\tilde\lambda_{1,t}}{\lambda_1}+\varphi'(|\nabla u|^2)_t+\psi'(u-\underline u)_t. \tag{4.7}$$

    Define

    $$\mathcal L:=F^{i\bar j}\frac{\partial^2}{\partial z^i\partial\bar z^j}-\frac{\partial}{\partial t}.$$

    Obviously,

    $$0\ge\mathcal L\tilde W=\mathcal L\log\tilde\lambda_1+\mathcal L\varphi(|\nabla u|^2)+\mathcal L\psi(u-\underline u). \tag{4.8}$$

    Next, we estimate the terms in (4.8). Direct calculation shows that

    $$\mathcal L\log\tilde\lambda_1=\frac{F^{i\bar i}\tilde\lambda_{1,i\bar i}}{\lambda_1}-\frac{F^{i\bar i}|\tilde\lambda_{1,i}|^2}{\lambda_1^2}-\frac{\tilde\lambda_{1,t}}{\lambda_1}. \tag{4.9}$$

    According to Inequality (78) in [20], we have

    $$\tilde\lambda_{1,i\bar i}\ge X_{i\bar i1\bar1}-2\,\mathrm{Re}\big(X_{i\bar11}\overline{T^1_{i1}}\big)-C_0\lambda_1, \tag{4.10}$$

    where $C_0$ depends on $\chi$, $\omega$, $|\phi|_{C^2(M\times[-C,C])}$, $|\underline u|_{C^2(M\times[0,T))}$, $|\partial_t\underline u|_{C^0(M\times[0,T))}$ and $|u_0|_{C^2(M)}$. From here on, $C_0$ may absorb the constants it previously denoted and can change from line to line, but it does not depend on the parameters chosen later. Calculating the covariant derivatives of (2.2) in the directions $z^1$ and $\bar z^1$, we obtain

    $$u_{t1}=F^{i\bar i}X_{i\bar i1}-(\log\phi)_1-(\log\phi)_uu_1, \tag{4.11}$$

    and

    $$u_{t1\bar1}=F^{i\bar j,p\bar q}X_{i\bar j1}X_{p\bar q\bar1}+F^{i\bar i}X_{i\bar i1\bar1}-(\log\phi)_{1\bar1}-(\log\phi)_{1u}u_{\bar1}-(\log\phi)_{u\bar1}u_1-(\log\phi)_{uu}|u_1|^2-(\log\phi)_uu_{1\bar1}. \tag{4.12}$$

    Notice that

    $$X_{1\bar1i}=\chi_{1\bar1i}+u_{1\bar1i}=\big(\chi_{1\bar1i}-\chi_{i\bar11}+T^p_{i1}\chi_{p\bar1}\big)+X_{i\bar11}-T^1_{i1}\lambda_1, \tag{4.13}$$

    therefore

    $$|X_{1\bar1i}|^2\le|X_{i\bar11}|^2-2\lambda_1\,\mathrm{Re}\big(X_{i\bar11}\overline{T^1_{i1}}\big)+C_0\big(\lambda_1^2+|X_{1\bar1i}|\big). \tag{4.14}$$

    Combining (4.14) with

    $$\tilde\lambda_{1,i}=X_{1\bar1i}-(D_{11})_i$$

    gives

    $$\begin{aligned}-\frac{F^{i\bar i}|\tilde\lambda_{1,i}|^2}{\lambda_1^2}&=-\frac{F^{i\bar i}|X_{1\bar1i}|^2}{\lambda_1^2}+\frac{2}{\lambda_1^2}F^{i\bar i}\,\mathrm{Re}\big(X_{1\bar1i}\overline{(D_{11})_i}\big)-\frac{F^{i\bar i}|(D_{11})_i|^2}{\lambda_1^2}\\&\ge-\frac{F^{i\bar i}|X_{1\bar1i}|^2}{\lambda_1^2}-\frac{C_0}{\lambda_1^2}F^{i\bar i}|X_{1\bar1i}|-C_0\mathcal F\\&\ge-\frac{F^{i\bar i}|X_{i\bar11}|^2}{\lambda_1^2}+\frac{2}{\lambda_1}F^{i\bar i}\,\mathrm{Re}\big(X_{i\bar11}\overline{T^1_{i1}}\big)-\frac{C_0}{\lambda_1^2}F^{i\bar i}|X_{1\bar1i}|-C_0\mathcal F.\end{aligned} \tag{4.15}$$

    We may assume $\lambda_1\ge K$, otherwise the proof is complete. Putting (4.10)–(4.12) and (4.15) into (4.9) yields

    $$\begin{aligned}\mathcal L\log\tilde\lambda_1&\ge-\frac{F^{i\bar j,p\bar q}X_{i\bar j1}X_{p\bar q\bar1}}{\lambda_1}-\frac{F^{i\bar i}|X_{i\bar11}|^2}{\lambda_1^2}-\frac{C_0}{\lambda_1}\cdot\frac{F^{i\bar i}|X_{1\bar1i}|}{\lambda_1}-C_0\mathcal F\\&\quad+\frac{1}{\lambda_1}\Big((\log\phi)_{1\bar1}+(\log\phi)_{1u}u_{\bar1}+(\log\phi)_{u\bar1}u_1+(\log\phi)_{uu}|u_1|^2-(\log\phi)_u\chi_{1\bar1}\Big)\\&\ge-\frac{F^{i\bar j,p\bar q}X_{i\bar j1}X_{p\bar q\bar1}}{\lambda_1}-\frac{F^{i\bar i}|X_{i\bar11}|^2}{\lambda_1^2}-\frac{C_0}{\lambda_1}\cdot\frac{F^{i\bar i}|X_{1\bar1i}|}{\lambda_1}-C_0\mathcal F-C_0. \end{aligned} \tag{4.16}$$

    A simple computation gives

    $$\mathcal L\varphi(|\nabla u|^2)=\varphi'F^{i\bar i}(|\nabla u|^2)_{i\bar i}+\varphi''F^{i\bar i}\big|(|\nabla u|^2)_i\big|^2-\varphi'(|\nabla u|^2)_t. \tag{4.17}$$

    Next, we estimate the formula (4.17). Differentiating Eq (2.2), we have

    $$(u_t)_p=F^{i\bar i}X_{i\bar ip}-(\log\phi)_p-(\log\phi)_uu_p, \tag{4.18}$$

    and

    $$(u_t)_{\bar p}=F^{i\bar i}X_{i\bar i\bar p}-(\log\phi)_{\bar p}-(\log\phi)_uu_{\bar p}. \tag{4.19}$$

    It follows from (4.18) and (4.19) that

    $$\begin{aligned}\partial_t|\nabla u|^2&=\sum_pu_{tp}u_{\bar p}+\sum_pu_pu_{t\bar p}\\&=F^{i\bar i}\sum_p\big(X_{i\bar ip}u_{\bar p}+X_{i\bar i\bar p}u_p\big)-\sum_p(\log\phi)_pu_{\bar p}-\sum_p(\log\phi)_{\bar p}u_p-2(\log\phi)_u|\nabla u|^2.\end{aligned} \tag{4.20}$$

    By commuting derivatives, we have the identity

    $$F^{i\bar i}X_{i\bar ip}=F^{i\bar i}u_{i\bar ip}+F^{i\bar i}\chi_{i\bar ip}=F^{i\bar i}u_{pi\bar i}-F^{i\bar i}T^q_{pi}u_{q\bar i}-F^{i\bar i}u_qR_{i\bar ip}{}^{q}+F^{i\bar i}\chi_{i\bar ip}. \tag{4.21}$$

    Direct calculation gives

    $$F^{i\bar i}(|\nabla u|^2)_{i\bar i}=\sum_pF^{i\bar i}\big(u_{pi\bar i}u_{\bar p}+u_{\bar pi\bar i}u_p\big)+\sum_pF^{i\bar i}\big(u_{pi}u_{\bar p\bar i}+u_{p\bar i}u_{\bar pi}\big). \tag{4.22}$$

    It follows from (4.21) that

    $$\begin{aligned}\sum_pF^{i\bar i}u_{pi\bar i}u_{\bar p}-\sum_pF^{i\bar i}X_{i\bar ip}u_{\bar p}&=\sum_pF^{i\bar i}T^q_{pi}u_{q\bar i}u_{\bar p}+\sum_pF^{i\bar i}u_{\bar p}u_qR_{i\bar ip}{}^{q}-\sum_pF^{i\bar i}\chi_{i\bar ip}u_{\bar p}\\&\ge-C_0K^{\frac12}F^{i\bar i}X_{i\bar i}-C_0K^{\frac12}\mathcal F-C_0K\mathcal F.\end{aligned} \tag{4.23}$$

    Noticing that

    $$F^{i\bar i}X_{i\bar i}=\sum_i\frac{\sigma_{k-1}(\lambda|i)}{\sigma_k}\lambda_i-\sum_i\frac{\sigma_{l-1}(\lambda|i)}{\sigma_l}\lambda_i=k-l, \tag{4.24}$$
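    (Identity (4.24) follows from the Euler-type relation for elementary symmetric polynomials, recalled here for convenience since it is not restated in the original:

    $$\sum_i\lambda_i\,\sigma_{m-1}(\lambda|i)=m\,\sigma_m(\lambda),\qquad\text{so}\qquad \sum_i\frac{\sigma_{k-1}(\lambda|i)}{\sigma_k}\lambda_i-\sum_i\frac{\sigma_{l-1}(\lambda|i)}{\sigma_l}\lambda_i=k-l.)$$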

    From (4.24) and (4.23), we obtain

    $$\sum_pF^{i\bar i}u_{pi\bar i}u_{\bar p}-\sum_pF^{i\bar i}X_{i\bar ip}u_{\bar p}\ge-C_0K^{\frac12}-C_0K^{\frac12}\mathcal F-C_0K\mathcal F. \tag{4.25}$$

    In the same way, we can get

    $$\sum_pF^{i\bar i}u_{\bar pi\bar i}u_p-\sum_pF^{i\bar i}X_{i\bar i\bar p}u_p\ge-C_0K^{\frac12}-C_0K^{\frac12}\mathcal F-C_0K\mathcal F. \tag{4.26}$$

    Using (4.20)–(4.26) in (4.17), we have

    $$\begin{aligned}\mathcal L\varphi(|\nabla u|^2)&\ge\varphi''F^{i\bar i}\big|(|\nabla u|^2)_i\big|^2+\varphi'\big(-C_0K^{\frac12}-C_0K^{\frac12}\mathcal F-C_0K\mathcal F\big)\\&\quad+\varphi'\sum_p\big((\log\phi)_pu_{\bar p}+(\log\phi)_{\bar p}u_p+2(\log\phi)_u|\nabla u|^2\big)+\varphi'\sum_pF^{i\bar i}\big(|u_{pi}|^2+|u_{\bar pi}|^2\big)\\&\ge\varphi''F^{i\bar i}\big|(|\nabla u|^2)_i\big|^2+\varphi'\sum_pF^{i\bar i}\big(|u_{pi}|^2+|u_{\bar pi}|^2\big)-C_0-C_0\mathcal F.\end{aligned} \tag{4.27}$$

    A simple calculation gives

    $$\mathcal L\psi(u-\underline u)=\psi''F^{i\bar i}\big|(u-\underline u)_i\big|^2+\psi'\big[F^{i\bar i}(u-\underline u)_{i\bar i}-\partial_t(u-\underline u)\big]. \tag{4.28}$$

    Substituting (4.27), (4.28) and (4.16) into (4.8), we obtain

    $$\begin{aligned}0&\ge-\frac{F^{i\bar j,p\bar q}X_{i\bar j1}X_{p\bar q\bar1}}{\lambda_1}-\frac{F^{i\bar i}|X_{i\bar11}|^2}{\lambda_1^2}-\frac{C_0}{\lambda_1}\cdot\frac{F^{i\bar i}|X_{1\bar1i}|}{\lambda_1}+\varphi''F^{i\bar i}\big|(|\nabla u|^2)_i\big|^2\\&\quad+\varphi'\sum_pF^{i\bar i}\big(|u_{pi}|^2+|u_{\bar pi}|^2\big)-C_0-C_0\mathcal F+\psi''F^{i\bar i}\big|(u-\underline u)_i\big|^2+\psi'\big[F^{i\bar i}(u-\underline u)_{i\bar i}-\partial_t(u-\underline u)\big].\end{aligned} \tag{4.29}$$

    Let $\delta>0$ be a sufficiently small constant, to be chosen later, satisfying

    $$\delta\le\min\Big\{\frac{1}{1+4E},\ \frac12\Big\}. \tag{4.30}$$

    We separate the rest of the calculations into two cases.

    Case 1: $\lambda_n<-\delta\lambda_1$.

    Using (4.5), we find that

    $$\begin{aligned}\frac{F^{i\bar i}|X_{1\bar1i}|^2}{\lambda_1^2}&=F^{i\bar i}\Big|\varphi'(|\nabla u|^2)_i+\psi'(u-\underline u)_i-\frac{(D_{11})_i}{\lambda_1}\Big|^2\\&\le2(\varphi')^2F^{i\bar i}\big|(|\nabla u|^2)_i\big|^2+2F^{i\bar i}\Big|\psi'(u-\underline u)_i-\frac{(D_{11})_i}{\lambda_1}\Big|^2\\&\le2(\varphi')^2F^{i\bar i}\big|(|\nabla u|^2)_i\big|^2+C_0|\psi'|^2K\mathcal F+C_0\mathcal F.\end{aligned} \tag{4.31}$$

    From (4.13), we have

    $$\frac{|X_{i\bar11}|^2}{\lambda_1^2}\le\frac{|X_{1\bar1i}|^2}{\lambda_1^2}+C_0\Big(1+\frac{|X_{1\bar1i}|}{\lambda_1}\Big). \tag{4.32}$$

    Combining (4.31) with (4.32), we conclude that

    $$\frac{F^{i\bar i}|X_{i\bar11}|^2}{\lambda_1^2}\le2(\varphi')^2F^{i\bar i}\big|(|\nabla u|^2)_i\big|^2+C_0|\psi'|^2K\mathcal F+C_0\mathcal F+C_0\frac{F^{i\bar i}|X_{1\bar1i}|}{\lambda_1}. \tag{4.33}$$

    Note that the operator $F$ is concave, which implies that

    $$-\frac{F^{i\bar j,p\bar q}X_{i\bar j1}X_{p\bar q\bar1}}{\lambda_1}\ge0. \tag{4.34}$$

    Applying (4.33) and (4.34) to (4.29) and using $\varphi''=2(\varphi')^2$ yields

    $$\begin{aligned}0&\ge-C_0\frac{F^{i\bar i}|X_{1\bar1i}|}{\lambda_1}-\frac{C_0}{\lambda_1}\cdot\frac{F^{i\bar i}|X_{1\bar1i}|}{\lambda_1}+\varphi'\sum_pF^{i\bar i}\big(|u_{pi}|^2+|u_{\bar pi}|^2\big)-C_0|\psi'|^2K\mathcal F-C_0\mathcal F-C_0\\&\quad+\psi''F^{i\bar i}\big|(u-\underline u)_i\big|^2+\psi'\big[F^{i\bar i}(u-\underline u)_{i\bar i}-\partial_t(u-\underline u)\big].\end{aligned} \tag{4.35}$$

    Note that

    $$\frac{|X_{1\bar1i}|}{\lambda_1}=\Big|-\varphi'\sum_p\big(u_{pi}u_{\bar p}+u_pu_{\bar pi}\big)-\psi'(u-\underline u)_i+\frac{(D_{11})_i}{\lambda_1}\Big|.$$

    It follows that

    $$C_0\frac{F^{i\bar i}|X_{1\bar1i}|}{\lambda_1}\le C_0\varphi'K^{\frac12}\sum_pF^{i\bar i}\big(|u_{pi}|+|u_{\bar pi}|\big)+C_0|\psi'|K^{\frac12}\mathcal F+C_0\mathcal F. \tag{4.36}$$

    Using the inequality

    $$K^{\frac12}\big(|u_{pi}|+|u_{\bar pi}|\big)\le\frac{1}{4C_0}\big(|u_{pi}|^2+|u_{\bar pi}|^2\big)+C_0K,$$

    we deduce that

    $$C_0\frac{F^{i\bar i}|X_{1\bar1i}|}{\lambda_1}\le\frac14\varphi'\sum_pF^{i\bar i}\big(|u_{pi}|^2+|u_{\bar pi}|^2\big)+C_0|\psi'|K^{\frac12}\mathcal F+C_0\mathcal F. \tag{4.37}$$

    Since $\lambda_1>1$, we also have

    $$\frac{C_0}{\lambda_1}\cdot\frac{F^{i\bar i}|X_{1\bar1i}|}{\lambda_1}\le\frac14\varphi'\sum_pF^{i\bar i}\big(|u_{pi}|^2+|u_{\bar pi}|^2\big)+C_0|\psi'|K^{\frac12}\mathcal F+C_0\mathcal F. \tag{4.38}$$

    Since $\psi''>0$, we have

    $$\psi''F^{i\bar i}\big|(u-\underline u)_i\big|^2\ge0. \tag{4.39}$$

    According to Lemma 2.3, there are at most two possibilities:

    (1) If (2.8) holds true, then

    $$\psi'\big[F^{i\bar i}(u-\underline u)_{i\bar i}-\partial_t(u-\underline u)\big]\ge\theta(1+\mathcal F)|\psi'|. \tag{4.40}$$

    Substituting (4.37)–(4.40) into (4.35) and using $\varphi'\ge\frac{1}{4K}$ yield

    $$\begin{aligned}0&\ge\frac{1}{8K}\sum_pF^{i\bar i}\big(|u_{pi}|^2+|u_{\bar pi}|^2\big)-C_0|\psi'|^2K\mathcal F-C_0(\mathcal F+1)+\theta(1+\mathcal F)|\psi'|\\&\ge\frac{1}{8K}F^{i\bar i}\lambda_i^2-C_0(C_1+1)^2K\mathcal F+\theta(C_1+1)(\mathcal F+1)-C_0(\mathcal F+1)\\&\ge\frac{\delta^2\lambda_1^2}{8nK}\mathcal F-C_0(C_1+1)^2K\mathcal F+\theta(C_1+1)(\mathcal F+1)-C_0(\mathcal F+1).\end{aligned} \tag{4.41}$$

    We may choose $C_1$ so that $\theta C_1\ge C_0$. It follows from (4.41) that $\lambda_1\le\tilde CK$.

    (2) If (2.9) holds true, then

    $$F^{1\bar1}>\frac{\theta}{N}(1+\mathcal F). \tag{4.42}$$

    According to $\psi'<0$ and the concavity of the operator $F$, we have

    $$\begin{aligned}\psi'\big[F^{i\bar i}(u-\underline u)_{i\bar i}-\partial_t(u-\underline u)\big]&=\psi'\big[F^{i\bar i}(X_{i\bar i}-\underline X_{i\bar i})-\partial_t(u-\underline u)\big]\\&\ge\psi'\big[F(\chi_u)-F(\chi_{\underline u})-\partial_tu+\partial_t\underline u\big]\\&=\psi'\big[\log\phi(x,u)+\partial_t\underline u-F(\chi_{\underline u})\big]\ge C_0\psi'. \end{aligned} \tag{4.43}$$

    Using (4.37)–(4.39) and (4.43) in (4.35), together with (4.42), we find that

    $$\begin{aligned}0&\ge\frac{1}{8K}F^{i\bar i}\lambda_i^2-C_0|\psi'|^2K\mathcal F+C_0\psi'-C_0(\mathcal F+1)\\&\ge\frac{\theta\lambda_1^2}{8NK}(1+\mathcal F)+\frac{\delta^2\lambda_1^2}{8nK}\mathcal F-C_0(C_1+1)^2K\mathcal F-C_0(C_1+1)-C_0(\mathcal F+1).\end{aligned} \tag{4.44}$$

    Let $\lambda_1$ be sufficiently large, so that

    $$\frac{\theta\lambda_1^2}{8NK}(1+\mathcal F)-C_0(C_1+1)-C_0(\mathcal F+1)\ge0.$$

    It follows from (4.44) that $\lambda_1\le\tilde CK$.

    Case 2: $\lambda_n\ge-\delta\lambda_1$.

    Let

    $$I=\big\{i\in\{1,\dots,n\}\ \big|\ F^{i\bar i}>\delta^{-1}F^{1\bar1}\big\}.$$

    Let us first treat those indices which are not in $I$. Similar to (4.31), we obtain

    $$\sum_{i\notin I}\frac{F^{i\bar i}|X_{1\bar1i}|^2}{\lambda_1^2}\le2(\varphi')^2\sum_{i\notin I}F^{i\bar i}\big|(|\nabla u|^2)_i\big|^2+\frac{C_0K}{\delta}|\psi'|^2F^{1\bar1}+C_0\mathcal F. \tag{4.45}$$

    Using (4.32) yields

    $$\sum_{i\notin I}\frac{F^{i\bar i}|X_{i\bar11}|^2}{\lambda_1^2}\le2(\varphi')^2\sum_{i\notin I}F^{i\bar i}\big|(|\nabla u|^2)_i\big|^2+\frac{C_0K}{\delta}|\psi'|^2F^{1\bar1}+C_0\sum_{i\notin I}\frac{F^{i\bar i}|X_{1\bar1i}|}{\lambda_1}+C_0\mathcal F. \tag{4.46}$$

    Since

    $$F^{i\bar1,1\bar i}=\frac{F^{i\bar i}-F^{1\bar1}}{X_{1\bar1}-X_{i\bar i}}\qquad\text{and}\qquad \lambda_i\ge\lambda_n\ge-\delta\lambda_1,$$

    it follows that

    $$\sum_{i\in I}F^{i\bar1,1\bar i}\ge\frac{1-\delta}{1+\delta}\,\frac{1}{\lambda_1}\sum_{i\in I}F^{i\bar i},$$

    and hence

    $$\sum_{i\in I}F^{i\bar1,1\bar i}\frac{|X_{i\bar11}|^2}{\lambda_1}\ge\frac{1-\delta}{1+\delta}\sum_{i\in I}F^{i\bar i}\frac{|X_{i\bar11}|^2}{\lambda_1^2}. \tag{4.47}$$

    Recalling $\varphi''=2(\varphi')^2$ and $0<\delta\le\frac12$, we obtain from (4.5) that

    $$\begin{aligned}\sum_{i\in I}\varphi''F^{i\bar i}\big|(|\nabla u|^2)_i\big|^2&=2\sum_{i\in I}F^{i\bar i}\Big|\frac{X_{i\bar11}}{\lambda_1}+\psi'(u-\underline u)_i+\frac{\chi_{1\bar1i}-\chi_{i\bar11}+T^p_{i1}\chi_{p\bar1}-(D_{11})_i}{\lambda_1}\Big|^2\\&\ge2\sum_{i\in I}F^{i\bar i}\Big(\delta\Big|\frac{X_{i\bar11}}{\lambda_1}\Big|^2-\frac{2\delta}{1-\delta}(\psi')^2\big|(u-\underline u)_i\big|^2-C_0\Big)\\&\ge2\delta\sum_{i\in I}F^{i\bar i}\Big|\frac{X_{i\bar11}}{\lambda_1}\Big|^2-\frac{4\delta}{1-\delta}(\psi')^2F^{i\bar i}\big|(u-\underline u)_i\big|^2-C_0\mathcal F.\end{aligned} \tag{4.48}$$

    Notice that $\psi''\ge\frac{4\epsilon}{1-\epsilon}(\psi')^2$ for $\epsilon=\frac{1}{4E+1}$. Since $\delta\le\frac{1}{4E+1}$, we get that

    $$\psi''F^{i\bar i}\big|(u-\underline u)_i\big|^2-\frac{4\delta}{1-\delta}(\psi')^2F^{i\bar i}\big|(u-\underline u)_i\big|^2\ge0. \tag{4.49}$$

    Substituting (4.46)–(4.49) into (4.29), we obtain

    $$\begin{aligned}0&\ge-C_0\sum_{i\notin I}\frac{F^{i\bar i}|X_{1\bar1i}|}{\lambda_1}-\frac{C_0}{\lambda_1}\cdot\frac{F^{i\bar i}|X_{1\bar1i}|}{\lambda_1}+\varphi'\sum_pF^{i\bar i}\big(|u_{pi}|^2+|u_{\bar pi}|^2\big)-C_0\mathcal F-C_0\\&\quad-\frac{C_0K}{\delta}|\psi'|^2F^{1\bar1}+\psi'\big[F^{i\bar i}(u-\underline u)_{i\bar i}-\partial_t(u-\underline u)\big].\end{aligned} \tag{4.50}$$

    Similar to (4.37) and (4.38), using the third term of (4.50) to absorb the first two terms, we get

    $$\begin{aligned}0&\ge\frac{1}{8K}\sum_pF^{i\bar i}\big(|u_{pi}|^2+|u_{\bar pi}|^2\big)-\frac{C_0K}{\delta}|\psi'|^2F^{1\bar1}-C_0\mathcal F-C_0+\psi'\big[F^{i\bar i}(u-\underline u)_{i\bar i}-\partial_t(u-\underline u)\big]\\&\ge\frac{1}{8K}F^{i\bar i}\lambda_i^2-\frac{C_0K}{\delta}|\psi'|^2F^{1\bar1}-C_0(\mathcal F+1)+\psi'\big[F^{i\bar i}(u-\underline u)_{i\bar i}-\partial_t(u-\underline u)\big].\end{aligned} \tag{4.51}$$

    According to Lemma 2.3, there are at most two possibilities:

    (1) If (2.8) holds true, then

    $$\psi'\big[F^{i\bar i}(u-\underline u)_{i\bar i}-\partial_t(u-\underline u)\big]\ge\theta(1+\mathcal F)|\psi'|. \tag{4.52}$$

    Putting (4.52) into (4.51) yields

    $$\begin{aligned}0&\ge\frac{1}{8K}F^{i\bar i}\lambda_i^2-\frac{C_0K}{\delta}|\psi'|^2F^{1\bar1}-C_0(\mathcal F+1)+\theta(1+\mathcal F)|\psi'|\\&\ge\frac{1}{8K}F^{1\bar1}\lambda_1^2-\frac{C_0K}{\delta}(1+C_1)^2F^{1\bar1}-C_0(\mathcal F+1)+\theta(1+\mathcal F)(1+C_1).\end{aligned} \tag{4.53}$$

    Here, $C_1$ is finally determined such that

    $$\theta C_1\ge C_0.$$

    It follows from (4.53) that

    $$\lambda_1\le\tilde CK.$$

    (2) If (2.9) holds true, then

    $$F^{1\bar1}>\frac{\theta}{N}(1+\mathcal F). \tag{4.54}$$

    Substituting (4.43) into (4.51) and using (4.54) give

    $$0\ge\frac{1}{8K}\lambda_1^2-\frac{C_0K}{\delta}(1+C_1)^2\frac{N}{\theta}-C_0(1+C_1)\frac{N}{\theta}-C_0. \tag{4.55}$$

    It follows that

    $$\lambda_1\le\tilde CK.$$

    To obtain the gradient estimates, we adapt the blow-up method of Dinew and Kolodziej [7] and reduce the problem to a Liouville type theorem which is proved in [7].

    Proposition 5.1. Under the assumptions of Theorem 1.3, let $u(x,t)$ be a solution to (1.1). Then there exists a uniform constant $\tilde C$ such that

    $$\sup_{M\times[0,T)}|\nabla u|\le\tilde C. \tag{5.1}$$

    Proof. Suppose that the gradient estimate (5.1) does not hold. Then there exists a sequence $(x_m,t_m)\in M\times[0,T)$ with $t_m\to T$ such that

    $$\sup_{M\times[0,t_m]}|\nabla u(x,t)|=|\nabla u(x_m,t_m)|\qquad\text{and}\qquad \lim_{m\to\infty}|\nabla u(x_m,t_m)|=\infty.$$

    After passing to a subsequence, we may assume that $\lim_{m\to\infty}x_m=x_0\in M$. We choose a coordinate chart $\{U,(z^1,\dots,z^n)\}$ at $x_0$, which we identify with an open set in $\mathbb C^n$, such that $\omega(0)=\beta:=\sqrt{-1}\sum_idz^i\wedge d\bar z^i$. We may assume that the open set contains $\overline{B_1(0)}$ and that $m$ is sufficiently large so that $z_m=z(x_m)\in B_1(0)$. Define

    $$|\nabla u(x_m,t_m)|=C_m,\qquad \tilde u_m(z)=u\Big(\frac{z}{C_m},t_m\Big).$$

    From this and Proposition 4.1, we have

    $$\sup|\nabla\tilde u_m|=1,\qquad \sup\big|\sqrt{-1}\,\partial\bar\partial\tilde u_m\big|\le\tilde C.$$
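    As a sanity check of these bounds (a brief chain-rule computation not spelled out in the original):

    $$\nabla\tilde u_m(z)=\frac{1}{C_m}\,\nabla u\Big(\frac{z}{C_m},t_m\Big),\qquad \partial\bar\partial\tilde u_m(z)=\frac{1}{C_m^2}\,\partial\bar\partial u\Big(\frac{z}{C_m},t_m\Big),$$

    so $\sup|\nabla\tilde u_m|\le\frac{1}{C_m}\sup_{M\times[0,t_m]}|\nabla u|=1$, with the value $1$ attained at $z=C_mz_m$, while Proposition 4.1 gives $|\partial\bar\partial\tilde u_m|\le\tilde C\,(C_m^2+1)/C_m^2\le2\tilde C$.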

    This yields that $\tilde u_m$ is contained in the Hölder space $C^{1,\gamma}(\mathbb C^n)$ with a uniform bound. Along with a standard application of the Arzelà-Ascoli theorem, we may suppose that $\tilde u_m$ has a limit $\tilde u\in C^{1,\gamma}(\mathbb C^n)$ with

    $$|\tilde u|+|\nabla\tilde u|<C\qquad\text{and}\qquad |\nabla\tilde u(0)|\ne0, \tag{5.2}$$

    in particular $\tilde u$ is not constant. On the other hand, similar to the method of Dinew and Kolodziej [7], we have

    $$\Big[\chi_u\Big(\frac{z}{C_m}\Big)\Big]^k\wedge\Big[\omega\Big(\frac{z}{C_m}\Big)\Big]^{n-k}=e^{\partial_tu}\,\phi_m\Big(\frac{z}{C_m},u\Big)\Big[\chi_u\Big(\frac{z}{C_m}\Big)\Big]^l\wedge\Big[\omega\Big(\frac{z}{C_m}\Big)\Big]^{n-l}.$$

    Fixing $z$, we obtain

    $$C_m^{2(k-l)}\Big[O\Big(\frac{1}{C_m^2}\Big)\beta+\sqrt{-1}\,\partial\bar\partial\tilde u_m(z)\Big]^k\wedge\Big[\Big(1+O\Big(\frac{|z|^2}{C_m^2}\Big)\Big)\beta\Big]^{n-k}=e^{\partial_tu_m}\,\phi_m\Big(\frac{z}{C_m},u_m\Big)\Big[O\Big(\frac{1}{C_m^2}\Big)\beta+\sqrt{-1}\,\partial\bar\partial\tilde u_m(z)\Big]^l\wedge\Big[\Big(1+O\Big(\frac{|z|^2}{C_m^2}\Big)\Big)\beta\Big]^{n-l}. \tag{5.3}$$

    Lemma 2.2 gives that $\partial_tu$ is uniformly bounded. Since

    $$\phi_m\Big(\frac{z}{C_m},u_m\Big)\le\sup_{M\times[-C,C]}\phi,$$

    $\phi_m\big(\frac{z}{C_m},u_m\big)$ is uniformly bounded. Taking limits on both sides of (5.3) as $m\to\infty$ yields

    $$\big(\sqrt{-1}\,\partial\bar\partial\tilde u\big)^k\wedge\beta^{n-k}=0, \tag{5.4}$$

    which holds in the pluripotential sense. Moreover, a similar reasoning tells us that, for any $1\le p\le k$,

    $$\big(\sqrt{-1}\,\partial\bar\partial\tilde u\big)^p\wedge\beta^{n-p}\ge0. \tag{5.5}$$

    Then (5.4) and (5.5) imply that $\tilde u$ is $k$-subharmonic. By a result of Blocki [1], $\tilde u$ is a maximal $k$-subharmonic function in $\mathbb C^n$. Applying the Liouville theorem in [7], we find that $\tilde u$ is a constant, which contradicts (5.2).

    In this section, we give a proof of the long-time existence of the flow and of its convergence, that is, Theorem 1.3.

    From Lemma 2.2 and Propositions 3.1, 4.1 and 5.1, we conclude that Eq (1.1) is uniformly parabolic. Therefore, by the Evans-Krylov regularity theory [8,13,19,27] for uniformly parabolic equations, we obtain higher-order derivative estimates. Since the a priori estimates do not depend on time, one can prove that the short-time existence on $[0,T)$ extends to $[0,\infty)$; that is, the smooth solution exists for all time $t>0$. Having established $C^\infty$ estimates on $[0,\infty)$, we can show the convergence of the solution flow.

    Let $v=e^{\gamma t}u_t$, where $0<\gamma<c_\phi$. Differentiating $v$ with respect to $t$ and using (2.4), we obtain

    $$v_t=e^{\gamma t}u_{tt}+\gamma v=\gamma v+e^{\gamma t}\Big(F^{i\bar j}u_{ti\bar j}-\frac{\phi_z}{\phi}u_t\Big)=F^{i\bar j}v_{i\bar j}+\Big(\gamma-\frac{\phi_z}{\phi}\Big)v.$$

    Using condition (1.5) yields $\gamma-\frac{\phi_z}{\phi}<0$. According to the parabolic maximum principle, it follows that

    $$\sup_{M\times[0,\infty)}|v(x,t)|\le\sup_M|u_t(x,0)|\le\sup_M\big|F(\lambda(u_0))-\log\phi(x,u_0)\big|\le C,$$

    which means that $|u_t|$ decays exponentially; in particular,

    $$\partial_t\Big(u+\frac{C}{\gamma}e^{-\gamma t}\Big)\le0.$$
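    Spelling out this last step (a brief check, with $C$ and $\gamma$ as above):

    $$|u_t(x,t)|=e^{-\gamma t}|v(x,t)|\le Ce^{-\gamma t},\qquad\text{hence}\qquad \partial_t\Big(u+\frac{C}{\gamma}e^{-\gamma t}\Big)=u_t-Ce^{-\gamma t}\le0.$$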

    According to Proposition 3.1, $u+\frac{C}{\gamma}e^{-\gamma t}$ is uniformly bounded and decreasing in $t$. Thus it converges to a smooth function $u_\infty$. From the higher-order a priori estimates, we see that $u(x,t)$ converges smoothly to $u_\infty$. Letting $t\to\infty$ in Eq (2.1) gives

    $$\frac{\sigma_k(\lambda(u_\infty))}{\sigma_l(\lambda(u_\infty))}=\phi(x,u_\infty).$$

    In this paper, we have considered the parabolic Hessian quotient equation (1.1), in which the right-hand side function $\phi$ depends on $u$. Firstly, we prove the $C^0$ estimate for Eq (1.1) by the parabolic C-subsolution condition and the Alexandroff-Bakelman-Pucci maximum principle. Secondly, we establish the $C^2$ estimate for Eq (1.1) using the parabolic C-subsolution condition. Thirdly, we obtain the gradient estimate by adapting the blowup method. Finally, we give the proof of the long-time existence of the solution to the parabolic equation and of its convergence. As an application, we show the solvability of a class of complex Hessian quotient equations, which generalizes the relevant existing results.

    This work was supported by the Natural Science Foundation of Anhui Province Education Department (Nos. KJ2021A0659, gxgnfx2018017); the Quality Engineering Project of Anhui Province Education Department (Nos. 2018jyxm0491, 2019mooc205, 2020szsfkc0686); and the Science Research Project of Fuyang Normal University (No. 2021KYQD0011).

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.



  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)