
Survival prediction among heart patients using machine learning techniques

  • Cardiovascular diseases are regarded as the most common cause of death worldwide. According to the World Health Organization, nearly 17.9 million people die of heart-related diseases each year. The high share of cardiovascular diseases in total worldwide deaths has motivated researchers to focus on ways to reduce these numbers. In this regard, several works have focused on the development of machine learning techniques/algorithms for early detection, diagnosis, and subsequent treatment of cardiovascular diseases. These works address a variety of issues, such as identifying important features for effectively predicting the occurrence of heart-related diseases and calculating the survival probability. This research contributes to the body of literature by selecting a standard, well-defined, and well-curated dataset as well as a set of standard benchmark algorithms, and by independently verifying their performance on a set of different performance evaluation metrics. From our experimental evaluation, it was observed that the decision tree is the best-performing algorithm in comparison to logistic regression, support vector machines, and artificial neural networks. Decision trees achieved 14% better accuracy than the average performance of the remaining techniques. In contrast to other studies, this research observed that artificial neural networks are not as competitive as the decision tree or the support vector machine.

    Citation: Abdulwahab Ali Almazroi. Survival prediction among heart patients using machine learning techniques[J]. Mathematical Biosciences and Engineering, 2022, 19(1): 134-145. doi: 10.3934/mbe.2022007
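    As a rough, hypothetical illustration of the kind of benchmark described in the abstract above — a minimal sketch, not the author's actual pipeline; the file name, column name, and hyperparameters below are placeholder assumptions — the four classifier families could be compared with scikit-learn as follows:

        # Hypothetical comparison of four model families on a tabular
        # heart-failure survival dataset with a binary outcome column.
        import pandas as pd
        from sklearn.model_selection import train_test_split
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.svm import SVC
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import accuracy_score, f1_score

        df = pd.read_csv("heart_failure_records.csv")                # placeholder file name
        X, y = df.drop(columns=["DEATH_EVENT"]), df["DEATH_EVENT"]   # placeholder label column
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.2, stratify=y, random_state=42)

        models = {
            "decision tree": DecisionTreeClassifier(max_depth=5, random_state=42),
            "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
            "support vector machine": make_pipeline(StandardScaler(), SVC()),
            "artificial neural network": make_pipeline(
                StandardScaler(),
                MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=42)),
        }

        for name, model in models.items():
            model.fit(X_train, y_train)              # train on the 80% split
            pred = model.predict(X_test)             # evaluate on the held-out 20%
            print(f"{name}: accuracy={accuracy_score(y_test, pred):.3f}, "
                  f"F1={f1_score(y_test, pred):.3f}")

    Such a script only sketches the comparison; additional metrics (precision, recall) and cross-validation would be needed to reproduce the kind of evaluation the abstract describes.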




    The study of nonlinear wave processes in real media with dispersion remains relevant despite the significant progress made in this area in recent years, see for example [1,2,3] and the numerous references therein. This concerns, in particular, the dynamics of oscillations in cases where high-energy particle fluxes occur in the medium, which significantly change the parameters of propagating wave structures, such as their phase velocity, amplitude and characteristic length. In recent and earlier years, a fairly large number of works have been devoted to studies of relativistic effects of this kind (see [12,13,14]).

    Recently, the great interest in the KP equation has led to the construction and study of many extensions of the KP equation. These extended models have greatly propelled research, directly resulting in many promising findings and giving insight into novel physical features of scientific and engineering applications. Moreover, lump solutions, and interaction solutions between lump waves and solitons, have attracted a great deal of attention, with the aim of making further progress in solitary wave theory. Lump solutions have been widely studied by researchers for their significant features in physics and many other nonlinear fields [18,19,20].

    Let $u=u(x,y,t)$, $(x,y,t)\in\mathbb{R}^{3}$ and $\alpha\ge4$. We consider the initial value problem for the generalized Kadomtsev-Petviashvili I equation,

    $$\begin{cases}\partial_t u+|D_x|^{\alpha}\partial_x u+\partial_x^{-1}\partial_y^{2}u+u\,\partial_x u=0,\\ u(x,y,0)=f(x,y),\end{cases}\qquad(1.1)$$

    with

    $$|D_x|^{\alpha}u(x,y,t)=\frac{1}{(2\pi)^{3/2}}\int_{\mathbb{R}^{3}}|\xi|^{\alpha}\,\mathcal{F}u(\xi,\mu,\tau)\,e^{i(x\xi+y\mu+t\tau)}\,d\xi\,d\mu\,d\tau.$$

    This equation belongs to the class of Kadomtsev-Petviashvili equations, which are models for the propagation of long dispersive nonlinear waves that are essentially unidirectional and have weak transverse effects. Due to the asymmetric nature of the equation with respect to the spatial derivatives, it is natural to consider the Cauchy problem for (1.1) with initial data in the anisotropic Sobolev spaces $H^{s_1,s_2}(\mathbb{R}^{2})$, defined by the norm

    $$\|u\|_{H^{s_1,s_2}(\mathbb{R}^{2})}=\left(\int_{\mathbb{R}^{2}}\langle\xi\rangle^{2s_1}\langle\eta\rangle^{2s_2}|\hat{u}(\xi,\eta)|^{2}\,d\xi\,d\eta\right)^{1/2}.$$

    Many authors have investigated the Cauchy problem for Kadomtsev-Petviashvili equations, see for instance [4,8,16]. Yan et al. [17] established the local well-posedness of the Cauchy problem for the Kadomtsev-Petviashvili I equation in the anisotropic Sobolev spaces $H^{s_1,s_2}(\mathbb{R}^{2})$ with $s_1>-\frac{\alpha-1}{4}$, $s_2\ge0$ and $\alpha\ge4$, as well as global well-posedness in $H^{s_1,0}(\mathbb{R}^{2})$ with $s_1>-\frac{(\alpha-1)(3\alpha-4)}{4(5\alpha+3)}$ if $4\le\alpha\le5$; they also proved that the Cauchy problem is globally well-posed in $H^{s_1,0}(\mathbb{R}^{2})$ with $s_1>-\frac{\alpha(3\alpha-4)}{4(5\alpha+4)}$ if $\alpha>5$. The authors in [8] proposed the problem

    $$\begin{cases}\partial_t u-\partial_x^{5}u+\partial_x^{-1}\partial_y^{2}u+u\,\partial_x u=0,\\ u(x,y,0)=u_0(x,y),\end{cases}\qquad(1.2)$$

    and proved that it is globally well-posed for given data in an anisotropic Gevrey space $G^{\sigma_1,\sigma_2}(\mathbb{R}^{2})$, $\sigma_1,\sigma_2\ge0$, with respect to the norm

    $$\|f\|_{G^{\sigma_1,\sigma_2}(\mathbb{R}^{2})}=\left(\int_{\mathbb{R}^{2}}e^{2\sigma_1|\xi|}e^{2\sigma_2|\eta|}|\hat{f}(\xi,\eta)|^{2}\,d\xi\,d\eta\right)^{1/2}.$$

    With initial data in the anisotropic Gevrey space

    $$G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}(\mathbb{R}^{2})=G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2},\qquad\sigma_1,\sigma_2\ge0,\ s_1,s_2\in\mathbb{R},$$

    and $\kappa\ge1$, we will consider the problem (1.1). The spaces $G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}$ can be defined as the completion of the Schwartz functions with respect to the norm

    $$\|f\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}(\mathbb{R}^{2})}=\left(\int_{\mathbb{R}^{2}}e^{2\sigma_1|\xi|^{1/\kappa}}e^{2\sigma_2|\eta|^{1/\kappa}}\mu^{2}(s_1,s_2)\,|\hat{f}(\xi,\eta)|^{2}\,d\xi\,d\eta\right)^{1/2},$$

    where

    $$\mu(s_1,s_2)=\langle\xi\rangle^{s_1}\langle\eta\rangle^{s_2}.$$

    In addition to the holomorphic extension property, Gevrey spaces satisfy the embeddings $G^{\sigma_1,\sigma_2}_{s_1}\subset G^{\sigma_1',\sigma_2'}_{s_1'}$ for $s_1,s_1'\in\mathbb{R}$ and $\sigma_i'<\sigma_i$ ($i=1,2$), where $G^{\sigma_1,\sigma_2,1}_{s_1,0}=G^{\sigma_1,\sigma_2}_{s_1}$; these follow from the corresponding estimate

    $$\|f\|_{G^{\sigma_1',\sigma_2'}_{s_1'}}\lesssim\|f\|_{G^{\sigma_1,\sigma_2}_{s_1}}.$$
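    For completeness, here is a short justification of the embedding estimate above, written for $\kappa=1$ (the case $\kappa\ge1$ is identical with $|\xi|$ replaced by $|\xi|^{1/\kappa}$): for $\sigma_1'<\sigma_1$ and any $s_1,s_1'\in\mathbb{R}$,

    $$\langle\xi\rangle^{2s_1'}e^{2\sigma_1'|\xi|}=\Big[\langle\xi\rangle^{2(s_1'-s_1)}e^{-2(\sigma_1-\sigma_1')|\xi|}\Big]\,\langle\xi\rangle^{2s_1}e^{2\sigma_1|\xi|}\le C_{s_1,s_1',\sigma_1-\sigma_1'}\,\langle\xi\rangle^{2s_1}e^{2\sigma_1|\xi|},$$

    since the bracketed factor is bounded in $\xi$. Applying the analogous bound in the $\eta$ variable, multiplying by $|\hat f(\xi,\eta)|^{2}$ and integrating gives the estimate.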

    The main reason for considering initial data in these spaces is the Paley-Wiener theorem.

    Proposition 1.1. Let $\sigma_1>0$, $s\in\mathbb{R}$. Then $f\in G^{\sigma_1}_{s}(\mathbb{R})$ if and only if it is the restriction to the real line of a function $F$ which is holomorphic in the strip $\{x+iy\in\mathbb{C}:\ |y|<\sigma_1\}$ and satisfies

    $$\sup_{|y|<\sigma_1}\|F(x+iy)\|_{H^{s}_{x}}<\infty.$$

    Notation

    We will also need the full space-time Fourier transform, denoted by

    $$\hat{f}(\xi,\eta,\tau)=\int_{\mathbb{R}^{3}}f(x,y,t)\,e^{-i(x\xi+y\eta+t\tau)}\,dx\,dy\,dt.$$

    In both cases, we will denote the corresponding inverse transform of a function

    $f=f(\xi,\eta)$ or $f=f(\xi,\eta,\tau)$ by $\mathcal{F}^{-1}(f)$.

    To simplify the notation, we introduce some operators. We first introduce the operator $A^{\sigma_1,\sigma_2}_{\kappa}$, which we define as

    $$A^{\sigma_1,\sigma_2}_{\kappa}f=\mathcal{F}^{-1}\!\left(e^{\sigma_1|\xi|^{1/\kappa}}e^{\sigma_2|\eta|^{1/\kappa}}\hat{f}\right).\qquad(1.3)$$

    Then we may define another useful operator,

    $$N^{\sigma_1,\sigma_2}_{\kappa}(f)=\partial_x\!\left[\big(A^{\sigma_1,\sigma_2}_{\kappa}f\big)^{2}-A^{\sigma_1,\sigma_2}_{\kappa}\big(f^{2}\big)\right].\qquad(1.4)$$

    For $x\in\mathbb{R}^{n}$, we denote $\langle x\rangle=(1+|x|^{2})^{1/2}$. Finally, we write $a\lesssim b$ if there exists a constant $C>0$ such that $a\le Cb$, and $a\sim b$ if $a\lesssim b\lesssim a$. If the constant $C$ depends on some quantity $q$, we denote this by $a\lesssim_{q}b$.

    Function spaces

    Since our proofs rely heavily on the theory developed by Yan et al. [17], let us state the function spaces they used explicitly, so that we can record the useful properties that we will exploit in our modifications of their spaces. The main function spaces they used are the so-called anisotropic Bourgain spaces adapted to the generalized Kadomtsev-Petviashvili I equation, whose norm is given by

    $$\|u\|_{X_{s_1,s_2,b}}=\left(\int_{\mathbb{R}^{3}}\theta^{2}(s_1,s_2,b)\,|\hat{u}(\xi,\eta,\tau)|^{2}\,d\xi\,d\eta\,d\tau\right)^{\frac12},$$

    where

    $$\theta(s_1,s_2,b)=\langle\xi\rangle^{s_1}\langle\eta\rangle^{s_2}\langle\tau+m(\xi,\eta)\rangle^{b},$$

    with $m(\xi,\eta)=\xi|\xi|^{\alpha}+\dfrac{\eta^{2}}{\xi}$.

    Furthermore, we will also need a hybrid of the analytic Gevrey and anisotropic Bourgain spaces, designated $X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}(\mathbb{R}^{3})$ and defined by the norm

    $$\|u\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}}=\left(\int_{\mathbb{R}^{3}}e^{2\sigma_1|\xi|^{1/\kappa}}e^{2\sigma_2|\eta|^{1/\kappa}}\theta^{2}(s_1,s_2,b)\,|\hat{u}(\xi,\eta,\tau)|^{2}\,d\xi\,d\eta\,d\tau\right)^{\frac12}.$$

    It is well known that these spaces satisfy the embedding

    $$X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}\hookrightarrow C\big(\mathbb{R};G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}(\mathbb{R}^{2})\big).$$

    Thus, solutions constructed in $X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}$ belong to the natural solution space.

    When considering local solutions, it is useful to work with localized versions of these spaces. For a time interval $I$ and a Banach space $Y$, we define the localized space $Y(I)$ by the norm

    $$\|u\|_{Y(I)}=\inf\{\|v\|_{Y}:\ v=u\ \text{on}\ I\}.$$

    The first result related to the short-term persistence of analyticity of solutions is given in the next Theorem.

    Theorem 2.1. Let $s_1>-\frac{\alpha-1}{4}$, $s_2\ge0$, $\alpha\ge4$, $\sigma_1\ge0$, $\sigma_2\ge0$ and $\kappa\ge1$. Then for all initial data $f\in G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}$ with $|\xi|^{-1}\hat{f}(\xi,\mu)\in L^{2}(\mathbb{R}^{2})$, there exist $\delta=\delta\big(\|f\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}}\big)>0$ and a unique solution $u$ of (1.1) on the time interval $[0,\delta]$ such that

    $$u\in C\big([0,\delta];G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}(\mathbb{R}^{2})\big).$$

    Moreover, the solution depends continuously on the data $f$. In particular, the time of existence can be chosen to satisfy

    $$\delta=c_0\big(1+\|f\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}}\big)^{-\gamma},$$

    for some constants $c_0>0$ and $\gamma>1$. Moreover, the solution $u$ satisfies

    $$\sup_{t\in[0,\delta]}\|u(t)\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}}\le4\|f\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}}.$$

    The second main result, concerning the evolution of the radius of analyticity in the $x$-direction, is given in the next theorem. Here

    $$X^{\sigma_1,0,1}_{s_1,0,b}=X^{\sigma_1,0}_{s_1,b},\qquad s_2=\sigma_2=0\ \text{and}\ \kappa=1.$$

    Theorem 2.2. Let $\sigma_1>0$, $s_1>-\frac{\alpha-1}{4}$, $\alpha=4,6,8,\dots$ and assume that $f\in G^{\sigma_1,0}_{s_1}$ with $|\xi|^{-1}\hat{f}(\xi,\mu)\in L^{2}(\mathbb{R}^{2})$. Then the solution $u$ given by Theorem 2.1 extends globally in time, and for any $T>0$ we have

    $$u\in C\big([0,T],G^{\sigma_1(T),0}_{s_1}(\mathbb{R}^{2})\big)\quad\text{with}\quad\sigma_1(T)=\min\{\sigma_1,\,CT^{-\rho}\},$$

    with $\rho=\frac{4}{\alpha-1}+\varepsilon$ for $\varepsilon>0$ when $\alpha=4$, $\rho=1$ when $\alpha=6,8,10,\dots$, and $C$ a positive constant.

    The method used here for proving lower bounds on the radius of analyticity was introduced in [15] in the study of the non-periodic KdV equation. It was applied to the higher-order nonlinear dispersive equation in [9] and to the system of mKdV equations in [10].

    Our last aim is to show the regularity of the solution in time. A non-periodic function $\varphi(x)$ belongs to the Gevrey class of order $\kappa$, i.e., $\varphi(x)\in G^{\kappa}$, if there exists a constant $C>0$ such that

    $$|\partial_x^{k}\varphi(x)|\le C^{k+1}(k!)^{\kappa},\qquad k=0,1,2,\dots.\qquad(2.1)$$

    Here we will show that for $x,y\in\mathbb{R}$, every $t\in[0,\delta]$ and $j,l,n\in\{0,1,2,\dots\}$, there exists $C>0$ such that

    $$|\partial_t^{j}\partial_x^{l}\partial_y^{n}u(x,y,t)|\le C^{j+l+n+1}(j!)^{(\alpha+1)\kappa}(l!)^{\kappa}(n!)^{\kappa},\qquad(2.2)$$

    i.e., $u(\cdot,\cdot,t)\in G^{\kappa}(\mathbb{R})\times G^{\kappa}(\mathbb{R})$ in $x,y$, and $u(x,y,\cdot)\in G^{(\alpha+1)\kappa}([0,\delta])$ in the time variable.

    Theorem 2.3. Let $s_1>-\frac{\alpha-1}{4}$, $s_2\ge0$, $\alpha\ge4$, $\sigma_1\ge0$, $\sigma_2\ge0$ and $\kappa\ge1$.

    If $f\in G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}$, then the solution

    $$u\in C\big([0,\delta],G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}\big),$$

    given by Theorem 2.1, belongs to the Gevrey class $G^{(\alpha+1)\kappa}$ in the time variable.

    Corollary 2.4. Let $\sigma_1>0$, $s_1>-\frac{\alpha-1}{4}$, $\alpha=4,6,8,\dots$. If $f\in G^{\sigma_1,0}_{s_1}$, then the solution

    $$u\in C\big([0,T],G^{\sigma_1(T),0}_{s_1}(\mathbb{R}^{2})\big),$$

    given by Theorem 2.2, belongs to the Gevrey class $G^{(\alpha+1)}$ in the time variable.

    The rest of the paper is organized as follows: In Section 3, we present all the auxiliary estimates that will be employed in the remaining sections. We prove Theorem 2.1 in Subsection 4.1 using the standard contraction method, and Theorem 2.2 in Subsection 4.2. Finally, in Section 5, we prove the $G^{(\alpha+1)\kappa}$ regularity in time.

    To begin with, let us consider the related linear problem

    $$\partial_t u+|D_x|^{\alpha}\partial_x u+\partial_x^{-1}\partial_y^{2}u=F,\qquad u(0)=f.$$

    By Duhamel's principle the solution can be written as

    $$u(t)=S(t)f-\frac12\int_0^{t}S(t-t')F(t')\,dt',\qquad(3.1)$$

    where

    $$\widehat{S(t)f}(\xi,\eta)=e^{-itm(\xi,\eta)}\hat{f}(\xi,\eta).$$

    We localize in $t$ by using a cut-off function $\psi\in C_0^{\infty}(\mathbb{R})$ with $\psi=1$ on $[-1,1]$ and $\operatorname{supp}\psi\subset[-2,2]$.

    We consider the operator Φ given by

    $$\Phi(u)=\psi(t)S(t)f-\frac{\psi_{\delta}(t)}{2}\int_0^{t}S(t-t')\,\partial_x u^{2}(t')\,dt',\qquad(3.2)$$

    where $\psi_{\delta}(t)=\psi\big(\tfrac{t}{\delta}\big)$. To this operator, we apply the following estimates.

    Lemma 3.1. (Linear estimate) Let $s_1,s_2\in\mathbb{R}$, $-\frac12<b'\le0\le b\le b'+1$, $\sigma_1\ge0$, $\sigma_2\ge0$, $\kappa\ge1$ and $\delta\in(0,1)$. Then

    $$\|\psi(t)S(t)f\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}}\le C\|f\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}},\qquad(3.3)$$
    $$\Big\|\psi_{\delta}(t)\int_0^{t}S(t-t')F(x,y,t')\,dt'\Big\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}}\le C\delta^{1-b+b'}\|F\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b'}}.\qquad(3.4)$$

    Proof. The proofs of (3.3) and (3.4) for $\sigma_1=\sigma_2=0$ can be found in Lemma 2.1 of [17]. These inequalities clearly remain valid for $\sigma_1,\sigma_2>0$, as one merely has to replace $f$ by $A^{\sigma_1,\sigma_2}_{\kappa}f$ and $F$ by $A^{\sigma_1,\sigma_2}_{\kappa}F$.
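    To spell out this reduction: $A^{\sigma_1,\sigma_2}_{\kappa}$ is a Fourier multiplier in $(\xi,\eta)$ only, so it commutes with $S(t)$ and with multiplication by functions of $t$, and by definition $\|v\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}}=\|A^{\sigma_1,\sigma_2}_{\kappa}v\|_{X_{s_1,s_2,b}}$ and $\|f\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}}=\|A^{\sigma_1,\sigma_2}_{\kappa}f\|_{H^{s_1,s_2}}$. Hence, for instance,

    $$\|\psi(t)S(t)f\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}}=\|\psi(t)S(t)\big(A^{\sigma_1,\sigma_2}_{\kappa}f\big)\|_{X_{s_1,s_2,b}}\le C\|A^{\sigma_1,\sigma_2}_{\kappa}f\|_{H^{s_1,s_2}}=C\|f\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}},$$

    where the middle inequality is the $\sigma_1=\sigma_2=0$ case of (3.3); the estimate (3.4) follows in the same way.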

    The final preliminary fact we must state is the following bilinear estimate, which is Lemma 3.1 of [17].

    Lemma 3.2. (Bilinear estimate in Bourgain space.)

    Let $s_1\ge-\frac{\alpha-1}{4}+\frac{4}{\alpha}\epsilon$, $s_2\ge0$, $\alpha\ge4$, $b=\frac12+\epsilon$ and $b'=-\frac12+2\epsilon$. Then, we have

    $$\|\partial_x(u_1u_2)\|_{X_{s_1,s_2,b'}}\lesssim\|u_1\|_{X_{s_1,s_2,b}}\|u_2\|_{X_{s_1,s_2,b}}.$$

    From this result, we deduce the following lemma, which is a corollary of Lemma 3.2.

    Lemma 3.3. (Bilinear estimate in Gevrey-Bourgain space.)

    Let $s_1>-\frac{\alpha-1}{4}+\frac{4}{\alpha}\epsilon$, $s_2\ge0$, $\alpha\ge4$, $\sigma_1\ge0$, $\sigma_2\ge0$, $\kappa\ge1$, $b=\frac12+\epsilon$ and $b'=-\frac12+2\epsilon$. Then, we have

    $$\|\partial_x(u_1u_2)\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b'}}\lesssim\|u_1\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}}\|u_2\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}}.$$

    Proof. It is not hard to see that

    $$e^{2(\sigma_1|\xi|^{1/\kappa}+\sigma_2|\eta|^{1/\kappa})}\big|\widehat{u_1u_2}(\xi,\eta,\tau)\big|^{2}=e^{2(\sigma_1|\xi|^{1/\kappa}+\sigma_2|\eta|^{1/\kappa})}\Big|\int\hat{u}_1(\xi-\xi_1,\eta-\eta_1,\tau-\tau_1)\,\hat{u}_2(\xi_1,\eta_1,\tau_1)\,d\xi_1\,d\eta_1\,d\tau_1\Big|^{2}\le\Big|\int e^{\sigma_1|\xi-\xi_1|^{1/\kappa}+\sigma_2|\eta-\eta_1|^{1/\kappa}}\hat{u}_1(\xi-\xi_1,\eta-\eta_1,\tau-\tau_1)\,e^{\sigma_1|\xi_1|^{1/\kappa}+\sigma_2|\eta_1|^{1/\kappa}}\hat{u}_2(\xi_1,\eta_1,\tau_1)\,d\xi_1\,d\eta_1\,d\tau_1\Big|^{2}=\Big|\widehat{A^{\sigma_1,\sigma_2}_{\kappa}u_1\,A^{\sigma_1,\sigma_2}_{\kappa}u_2}\Big|^{2}.$$

    By Lemma 3.2, we get

    $$\|\partial_x(u_1u_2)\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b'}}\le\big\|\partial_x\big(A^{\sigma_1,\sigma_2}_{\kappa}u_1\,A^{\sigma_1,\sigma_2}_{\kappa}u_2\big)\big\|_{X_{s_1,s_2,b'}}\lesssim\big\|A^{\sigma_1,\sigma_2}_{\kappa}u_1\big\|_{X_{s_1,s_2,b}}\big\|A^{\sigma_1,\sigma_2}_{\kappa}u_2\big\|_{X_{s_1,s_2,b}}=\|u_1\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}}\|u_2\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}}.$$

    The above lemmas will sometimes be used without explicit mention in the proof of Theorem 2.1.

    Lemma 4.1. Let $s_1>-\frac{\alpha-1}{4}+\frac{4}{\alpha}\epsilon$, $s_2\ge0$, $\alpha\ge4$, $\sigma_1\ge0$, $\sigma_2\ge0$, $\kappa\ge1$, $b=\frac12+\epsilon$, $b'=-\frac12+2\epsilon$ and $0<\delta<1$. Then

    $$\|\Phi(u)\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}}\le C\|f\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}}+C\delta^{\epsilon}\|u\|^{2}_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}},$$

    and

    $$\|\Phi(u_1)-\Phi(u_2)\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}}\le\frac12\|u_1-u_2\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}}.$$

    Proof. We combine Lemma 3.3 and Lemma 3.1 with the fixed point theorem. We define

    $$B\big(0,2C\|f\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}}\big)=\Big\{u:\ \|u\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}}\le2C\|f\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}}\Big\}.$$

    Then, we have

    $$\|\Phi(u)\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}}\le\|\psi(t)S(t)f\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}}+\frac12\Big\|\psi_{\delta}(t)\int_0^{t}S(t-t')\,\partial_x u^{2}(t')\,dt'\Big\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}}\le C\|f\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}}+C\delta^{\epsilon}\|\partial_x u^{2}\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b'}}\le C\|f\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}}+C\delta^{\epsilon}\|u\|^{2}_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}}.$$

    We choose δ such that

    $$\delta<\frac{1}{\big(C^{2}\|f\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}}\big)^{1/\epsilon}}.\qquad(4.1)$$

    We have

    $$\|\Phi(u)\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}}\le2C\|f\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}}.$$

    Thus, $\Phi$ maps $B\big(0,2C\|f\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}}\big)$ into itself, and it is a contraction there, since

    $$\|\Phi(u_1)-\Phi(u_2)\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}}\le\frac12\Big\|\psi_{\delta}(t)\int_0^{t}S(t-t')\,\partial_x\big(u_1^{2}-u_2^{2}\big)(t')\,dt'\Big\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}}\le C\delta^{\epsilon}\|u_1-u_2\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}}\Big[\|u_1\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}}+\|u_2\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}}\Big]\le4C^{2}\delta^{\epsilon}\|f\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}}\|u_1-u_2\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}}\le\frac12\|u_1-u_2\|_{X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}}.$$

    Here we choose $\delta$ such that

    $$\delta<\frac{1}{\big(8C^{2}\|f\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}}\big)^{1/\epsilon}}.\qquad(4.2)$$

    We then choose the time of existence to be

    $$\delta=c_0\big(1+\|f\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}}\big)^{-1/\epsilon}.$$

    For an appropriate choice of $c_0$, this will satisfy inequalities (4.1) and (4.2).
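    For instance, one admissible choice is $c_0\le\big(8C^{2}\big)^{-1/\epsilon}$: then

    $$\delta=c_0\big(1+\|f\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}}\big)^{-1/\epsilon}\le\Big(8C^{2}\big(1+\|f\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}}\big)\Big)^{-1/\epsilon}<\big(8C^{2}\|f\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}}\big)^{-1/\epsilon},$$

    so (4.2) holds, and (4.1) follows since $\big(8C^{2}\|f\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}}\big)^{-1/\epsilon}\le\big(C^{2}\|f\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}}\big)^{-1/\epsilon}$.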

    From Lemma 4.1, we see that for initial data $f(x,y)\in G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}(\mathbb{R}^{2})$, if the lifespan is $\delta=c_0/\big(1+\|f\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}}\big)^{1/\epsilon}$, then the map $\Phi$ is a contraction on a small ball centered at the origin in $X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}$. Hence, the map $\Phi$ has a unique fixed point $u$ in a neighborhood of $0$ with respect to the norm of $X^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2,b}$.

    The rest of the proof follows the standard argument.

    In this section, we prove Theorem 2.2. The first step is to obtain estimates on the growth of the norm of the solutions. To this end, we need to prove the following approximate conservation law.

    Theorem 4.2. Let $\sigma_1>0$ and $\delta$ be as in Theorem 2.1. There exist $b\in(1/2,1)$ and $C>0$ such that for any solution $u\in X^{\sigma_1,0}_{0,b}(I)$ to the Cauchy problem (1.1) on the time interval $I=[0,\delta]$, we have the estimate

    $$\sup_{t\in[0,\delta]}\|u(t)\|^{2}_{G^{\sigma_1,0}_{0}}\le\|f\|^{2}_{G^{\sigma_1,0}_{0}}+C\sigma_1^{\varrho}\|u\|^{3}_{X^{\sigma_1,0}_{0,b}(I)},\qquad(4.3)$$

    with $\varrho\in[0,\tfrac34)$ if $\alpha=4$, and $\varrho=1$ if $\alpha=6,8,10,\dots$.

    Before giving the proof, let us first state some preliminary lemmas. The first one is an immediate consequence of Lemma 12 in [15].

    Lemma 4.3. For $\sigma>0$, $0\le\theta\le1$ and $\xi,\xi_1\in\mathbb{R}$, we have

    $$e^{\sigma|\xi-\xi_1|}e^{\sigma|\xi_1|}-e^{\sigma|\xi|}\le\sigma^{\theta}\left[\frac{\langle\xi-\xi_1\rangle\,\langle\xi_1\rangle}{\langle\xi\rangle}\right]^{\theta}e^{\sigma|\xi-\xi_1|}e^{\sigma|\xi_1|}.$$

    This will be used to prove the following key estimate.

    Lemma 4.4. Let $N^{\sigma_1,0}_{1}(u)$ be as in Eq (1.4) for $\sigma_1\ge0$ and $\sigma_2=0$. Then for $b$ as in Lemma 3.2, we have

    $$\|N^{\sigma_1,0}_{1}(u)\|_{X_{0,0,b-1}}\le C\sigma_1^{\varrho}\|u\|^{2}_{X^{\sigma_1,0}_{0,b}},$$

    with $\varrho\in[0,\tfrac34)$ if $\alpha=4$ and $\varrho=1$ if $\alpha=6,8,10,\dots$

    Proof. We first observe that the inequality in Lemma 3.2 is equivalent to

    $$\Big\|\xi\,\theta(s_1,s_2,b-1)\int\frac{\hat{f}(\xi-\xi_1,\eta-\eta_1,\tau-\tau_1)}{\langle\xi-\xi_1\rangle^{s_1}\langle\eta-\eta_1\rangle^{s_2}\langle\phi(\xi-\xi_1,\eta-\eta_1,\tau-\tau_1)\rangle^{b}}\times\frac{\hat{g}(\xi_1,\eta_1,\tau_1)}{\langle\xi_1\rangle^{s_1}\langle\eta_1\rangle^{s_2}\langle\phi(\xi_1,\eta_1,\tau_1)\rangle^{b}}\,d\xi_1\,d\eta_1\,d\tau_1\Big\|_{L^{2}_{\xi,\eta,\tau}}\lesssim\|f\|_{L^{2}_{x,y,t}}\,\|g\|_{L^{2}_{x,y,t}},$$

    where we denote $\phi(\tau,\xi,\eta)=\tau+m(\xi,\eta)$. With this, we observe that the left-hand side of the inequality in Lemma 4.4 can be estimated by Lemma 4.3 as

    $$\|N^{\sigma_1,0}_{1}(u)\|_{X_{0,0,b-1}}\lesssim\sigma_1^{\varrho}\Big\|\frac{\xi}{\langle\xi\rangle^{\varrho}\,\langle\phi(\tau,\xi,\eta)\rangle^{1-b}}\int e^{\sigma_1|\xi-\xi_1|}\big|\hat{u}(\xi-\xi_1,\eta-\eta_1,\tau-\tau_1)\big|\,\langle\xi-\xi_1\rangle^{\varrho}\times e^{\sigma_1|\xi_1|}\big|\hat{u}(\xi_1,\eta_1,\tau_1)\big|\,\langle\xi_1\rangle^{\varrho}\,d\xi_1\,d\eta_1\,d\tau_1\Big\|_{L^{2}_{\xi,\eta,\tau}}.$$

    If we apply Lemma 3.2 with $s_1=-\varrho$, $s_2=0$, it will follow, from the comments above, that

    $$\|N^{\sigma_1,0}_{1}(u)\|_{X_{0,0,b-1}}\lesssim\sigma_1^{\varrho}\|u\|^{2}_{X^{\sigma_1,0}_{0,b}}.$$

    Proof of Theorem 4.2. Begin by applying the operator $A^{\sigma_1,0}_{1}$ to Eq (1.1). If we let $U=A^{\sigma_1,0}_{1}u$, then Eq (1.1) becomes

    $$\partial_t U+|D_x|^{\alpha}\partial_x U+\partial_x^{-1}\partial_y^{2}U+U\partial_x U=N^{\sigma_1,0}_{1}(u),\qquad(4.4)$$

    where $\alpha=4,6,8,\dots$ and $N^{\sigma_1,0}_{1}(u)$ is defined in Lemma 4.4. Multiplying (4.4) by $U$ and integrating with respect to the spatial variables, we obtain

    $$\int\Big(U\,\partial_t U+U\,|D_x|^{\alpha}\partial_x U+U\,\partial_x^{-1}\partial_y^{2}U+U^{2}\partial_x U\Big)\,dx\,dy=\int U\,N^{\sigma_1,0}_{1}(u)\,dx\,dy.$$

    If we apply integration by parts, we may rewrite the left-hand side as

    $$\partial_t\frac12\int U^{2}\,dx\,dy+\int\partial_x^{\frac{\alpha}{2}}U\;\partial_x^{\frac{\alpha}{2}+1}U\,dx\,dy-\int\partial_y U\;\partial_x^{-1}\partial_y U\,dx\,dy+\int U^{2}\partial_x U\,dx\,dy=\int U\,N^{\sigma_1,0}_{1}(u)\,dx\,dy,$$

    Since $\partial_x^{j}U(x,y,t)\to0$ as $|x|\to\infty$ (see [15]), each of the last three terms on the left-hand side vanishes, and the identity can be rewritten as

    $$\partial_t\int U^{2}(x,y,t)\,dx\,dy=2\int U(x,y,t)\,N^{\sigma_1,0}_{1}(u)(x,y,t)\,dx\,dy.$$

    Integrating with respect to time yields

    $$\int U^{2}(x,y,t)\,dx\,dy=\int U^{2}(x,y,0)\,dx\,dy+2\int_0^{t}\!\!\int U(x,y,t')\,N^{\sigma_1,0}_{1}(u)(x,y,t')\,dx\,dy\,dt'.$$

    Applying Cauchy-Schwarz and the definition of U, we obtain

    $$\|u(t)\|^{2}_{G^{\sigma_1,0}_{0}}\le\|f\|^{2}_{G^{\sigma_1,0}_{0}}+\|u\|_{X^{\sigma_1,0}_{0,b}(I)}\,\|N^{\sigma_1,0}_{1}(u)\|_{X_{0,0,b-1}(I)}.$$

    Applying Lemma 4.4 and the fact that $b=\frac12+\epsilon$, we can further estimate this by

    $$\|u(t)\|^{2}_{G^{\sigma_1,0}_{0}}\le\|f\|^{2}_{G^{\sigma_1,0}_{0}}+C\sigma_1^{\varrho}\|u\|^{3}_{X^{\sigma_1,0}_{0,b}},\qquad(4.5)$$

    as desired.

    Proof of Theorem 2.2. With the tools established in the previous subsection, we may begin the proof of Theorem 2.2. Let us first suppose that $T^{*}$ is the supremum of the set of times $T$ for which

    $$u\in C\big([0,T];G^{\sigma_1,0}_{s_1}\big).$$

    If $T^{*}=\infty$, there is nothing to prove, so let us assume that $T^{*}<\infty$. In this case, it suffices to prove that

    $$u\in C\big([0,T],G^{\sigma_1(T),0}_{s_1}\big),\qquad(4.6)$$

    for all $T>T^{*}$. To show that this is the case, we will use Theorem 2.1 and Theorem 4.2 to construct a solution which exists over subintervals of width $\delta$, using the parameter $\sigma_1$ to control the growth of the norm of the solution. We first prove the case $s_1=0$ and then generalize.

    The desired result will follow from the following proposition.

    Proposition 4.5. Let $T>0$, $s_1=0$, $0<\sigma_1\le\sigma_0$ and $\delta>0$ be numbers such that $n\delta\le T<(n+1)\delta$. Then the solution $u$ to the Cauchy problem (1.1) satisfies

    $$\sup_{t\in[0,n\delta]}\|u(t)\|^{2}_{G^{\sigma_1,0}_{0}}\le\|f\|^{2}_{G^{\sigma_1,0}_{0}}+2^{3}C\sigma_1^{\varrho}\,n\,\|f\|^{3}_{G^{\sigma_0,0}_{0}},\qquad(4.7)$$

    and

    $$\sup_{t\in[0,n\delta]}\|u(t)\|^{2}_{G^{\sigma_1,0}_{0}}\le4\|f\|^{2}_{G^{\sigma_0,0}_{0}},\qquad(4.8)$$

    if

    $$\sigma_1=C_1T^{-\frac{1}{\varrho}},\quad\text{and}\quad C_1=\left(\frac{c_0}{2^{4}\,C\,\|f\|_{G^{\sigma_0,0}_{0}}\big(1+4\|f\|_{G^{\sigma_0,0}_{0}}\big)^{1/\epsilon}}\right)^{\frac{1}{\varrho}},$$

    for some constant C>0.

    Proof. For fixed $T\ge T^{*}$, we will prove, for sufficiently small $\sigma_1>0$, that

    $$\sup_{t\in[0,T]}\|u(t)\|^{2}_{G^{\sigma_1,0}_{0}}\le4\|u(0)\|^{2}_{G^{\sigma_0,0}_{0}}.\qquad(4.9)$$

    We will use Theorem 2.1 and Theorem 4.2 with the time step

    $$\delta=c_0\big(1+4\|f\|_{G^{\sigma_0,0}_{0}}\big)^{-1/\epsilon}.\qquad(4.10)$$

    The smallness conditions on $\sigma_1$ will be

    $$\sigma_1\le\sigma_0\quad\text{and}\quad\frac{2T}{\delta}\,C\,\sigma_1^{\varrho}\,2^{3}\,\|f\|_{G^{\sigma_0,0}_{0}}\le1,\qquad(4.11)$$

    where $C>0$ is the constant in Theorem 4.2. Proceeding by induction, we will verify that

    $$\sup_{t\in[0,n\delta]}\|u(t)\|^{2}_{G^{\sigma_1,0}_{0}}\le\|f\|^{2}_{G^{\sigma_1,0}_{0}}+nC\sigma_1^{\varrho}\,2^{3}\,\|f\|^{3}_{G^{\sigma_0,0}_{0}},\qquad(4.12)$$
    $$\sup_{t\in[0,n\delta]}\|u(t)\|^{2}_{G^{\sigma_1,0}_{0}}\le4\|f\|^{2}_{G^{\sigma_0,0}_{0}},\qquad(4.13)$$

    for $n\in\{1,\dots,m+1\}$, where $m\in\mathbb{N}$ is chosen so that $T\in[m\delta,(m+1)\delta)$. This $m$ does exist, since by Theorem 2.1 and the definition of $T^{*}$, we have

    $$\delta<c_0\big(1+\|f\|_{G^{\sigma_0,0}_{0}}\big)^{-1/\epsilon}\le T^{*},\quad\text{hence}\quad\delta<T.$$

    We now cover the interval $[0,\delta]$; by Theorem 4.2, we have

    $$\sup_{t\in[0,\delta]}\|u(t)\|^{2}_{G^{\sigma_1,0}_{0}}\le\|f\|^{2}_{G^{\sigma_1,0}_{0}}+C\sigma_1^{\varrho}\|f\|^{3}_{G^{\sigma_1,0}_{0}}\le\|f\|^{2}_{G^{\sigma_1,0}_{0}}+C\sigma_1^{\varrho}\|f\|^{3}_{G^{\sigma_0,0}_{0}},$$

    where we used that

    $$\|f\|_{G^{\sigma_1,0}_{0}}\le\|f\|_{G^{\sigma_0,0}_{0}},$$

    since $\sigma_1\le\sigma_0$. This verifies (4.12) for $n=1$, and now (4.13) follows using again

    $$\|f\|_{G^{\sigma_1,0}_{0}}\le\|f\|_{G^{\sigma_0,0}_{0}},$$

    as well as $C\sigma_1^{\varrho}\|f\|_{G^{\sigma_0,0}_{0}}\le1$. Next, assuming that (4.12) and (4.13) hold for some $n\in\{1,\dots,m\}$, we will prove that they hold for $n+1$. We estimate

    $$\sup_{t\in[n\delta,(n+1)\delta]}\|u(t)\|^{2}_{G^{\sigma_1,0}_{0}}\le\|u(n\delta)\|^{2}_{G^{\sigma_1,0}_{0}}+C\sigma_1^{\varrho}\|u(n\delta)\|^{3}_{G^{\sigma_1,0}_{0}}\le\|u(n\delta)\|^{2}_{G^{\sigma_1,0}_{0}}+C\sigma_1^{\varrho}\,2^{3}\,\|f\|^{3}_{G^{\sigma_0,0}_{0}}\le\|f\|^{2}_{G^{\sigma_1,0}_{0}}+nC\sigma_1^{\varrho}2^{3}\|f\|^{3}_{G^{\sigma_0,0}_{0}}+C\sigma_1^{\varrho}2^{3}\|f\|^{3}_{G^{\sigma_0,0}_{0}},\qquad(4.14)$$

    verifying (4.12) with n replaced by n+1. To get (4.13) with n replaced by n+1, it is then enough to have

    $$(n+1)\,C\sigma_1^{\varrho}\,2^{3}\,\|f\|_{G^{\sigma_0,0}_{0}}\le1.$$

    But this holds by (4.11), since

    $$n+1\le m+1\le\frac{T}{\delta}+1<\frac{2T}{\delta}.$$

    Finally, the condition (4.11) is satisfied for $\sigma_1\in(0,\sigma_0)$ such that

    $$\frac{2T}{\delta}\,C\sigma_1^{\varrho}\,2^{3}\,\|f\|_{G^{\sigma_0,0}_{0}}=1.$$

    Thus, $\sigma_1=C_1T^{-1/\varrho}$, where

    $$C_1=\left(\frac{c_0}{2^{4}\,C\,\|f\|_{G^{\sigma_0,0}_{0}}\big(1+4\|f\|_{G^{\sigma_0,0}_{0}}\big)^{1/\epsilon}}\right)^{1/\varrho}.$$
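    Indeed, up to the precise numerical constants, solving the equality above for $\sigma_1$ and inserting the time step (4.10) gives

    $$\sigma_1^{\varrho}=\frac{\delta}{2^{4}\,C\,T\,\|f\|_{G^{\sigma_0,0}_{0}}}=\frac{c_0}{2^{4}\,C\,T\,\|f\|_{G^{\sigma_0,0}_{0}}\big(1+4\|f\|_{G^{\sigma_0,0}_{0}}\big)^{1/\epsilon}},$$

    which is exactly $\sigma_1=C_1T^{-1/\varrho}$ with the constant $C_1$ displayed above.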

    For general $s_1$, we have

    $$u_0\in G^{\sigma_0,0}_{s_1}\subset G^{\sigma_0/2,0}_{0}.$$

    The case $s_1=0$ having already been proved, we know that there is a $T_1>0$ such that

    $$u\in C\big([0,T_1),G^{\sigma_0/2,0}_{0}\big),$$

    and

    $$u\in C\big([0,T],G^{2\varsigma T^{-1/\varrho},0}_{0}\big),\quad\text{for }T\ge T_1,$$

    where $\varsigma>0$ depends on $f$, $\sigma_0$ and $\varrho$. We now conclude that

    $$u\in C\big([0,T_1),G^{\sigma_0/4,0}_{s_1}\big),$$

    and

    $$u\in C\big([0,T],G^{\varsigma T^{-1/\varrho},0}_{s_1}\big),\quad\text{for }T\ge T_1.$$

    The proof of Theorem 2.2 is now completed.

    We follow the methods found in [5,6,7,11] to treat the regularity in time, in the Gevrey sense, of the unique solution of (1.1).

    Proposition 5.1. Let $\delta>0$, $s_1>-\frac{\alpha-1}{4}$, $s_2\ge0$ and

    $$u\in C\big([0,\delta];G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}\big),$$

    be the solution of (1.1). Then $u$ belongs, in $x,y$, to $G^{\kappa}$ for all times near zero. In other words,

    $$|\partial_x^{l}\partial_y^{n}u(x,y,t)|\le C^{l+n+1}(l!)^{\kappa}(n!)^{\kappa},\qquad(5.1)$$

    for all $(x,y)\in\mathbb{R}^{2}$, $t\in[0,\delta]$, $l,n\in\{0,1,\dots\}$ and some constant $C>0$.

    Proof. We have, for any $t\in[0,\delta]$,

    $$\|\partial_x^{l}\partial_y^{n}u(\cdot,\cdot,t)\|^{2}_{H^{s_1,s_2}}=\int_{\mathbb{R}^{2}}\langle\xi\rangle^{2s_1}\langle\eta\rangle^{2s_2}\big|\widehat{\partial_x^{l}\partial_y^{n}u}(\xi,\eta,t)\big|^{2}\,d\xi\,d\eta=\int_{\mathbb{R}^{2}}|\xi|^{2l}|\eta|^{2n}\langle\xi\rangle^{2s_1}\langle\eta\rangle^{2s_2}|\hat{u}(\xi,\eta,t)|^{2}\,d\xi\,d\eta=\int_{\mathbb{R}^{2}}|\xi|^{2l}|\eta|^{2n}e^{-2\sigma_1|\xi|^{1/\kappa}}e^{-2\sigma_2|\eta|^{1/\kappa}}\,\langle\xi\rangle^{2s_1}\langle\eta\rangle^{2s_2}e^{2\sigma_1|\xi|^{1/\kappa}}e^{2\sigma_2|\eta|^{1/\kappa}}\,|\hat{u}(\xi,\eta,t)|^{2}\,d\xi\,d\eta.$$

    We observe that

    $$e^{\frac{2\sigma_1}{\kappa}|\xi|^{1/\kappa}}=\sum_{j=0}^{\infty}\frac{1}{j!}\Big(\frac{2\sigma_1}{\kappa}|\xi|^{1/\kappa}\Big)^{j}\ge\frac{1}{(2l)!}\Big(\frac{2\sigma_1}{\kappa}\Big)^{2l}|\xi|^{2l/\kappa},\qquad l\in\{0,1,\dots\},\ \xi\in\mathbb{R},$$

    and

    $$e^{\frac{2\sigma_2}{\kappa}|\eta|^{1/\kappa}}=\sum_{j=0}^{\infty}\frac{1}{j!}\Big(\frac{2\sigma_2}{\kappa}|\eta|^{1/\kappa}\Big)^{j}\ge\frac{1}{(2n)!}\Big(\frac{2\sigma_2}{\kappa}\Big)^{2n}|\eta|^{2n/\kappa},\qquad n\in\{0,1,\dots\},\ \eta\in\mathbb{R}.$$

    This implies that

    $$|\xi|^{2l}\,e^{-2\sigma_1|\xi|^{1/\kappa}}\le C_{\sigma_1,\kappa}^{2l}\,\big((2l)!\big)^{\kappa},$$
    $$|\eta|^{2n}\,e^{-2\sigma_2|\eta|^{1/\kappa}}\le C_{\sigma_2,\kappa}^{2n}\,\big((2n)!\big)^{\kappa}.$$

    Thus,

    $$\|\partial_x^{l}\partial_y^{n}u(\cdot,\cdot,t)\|^{2}_{H^{s_1,s_2}}\le C_{\sigma_1,\sigma_2,\kappa}^{2l+2n}\,\big((2l)!\big)^{\kappa}\big((2n)!\big)^{\kappa}\int_{\mathbb{R}^{2}}\langle\xi\rangle^{2s_1}\langle\eta\rangle^{2s_2}e^{2\sigma_1|\xi|^{1/\kappa}}e^{2\sigma_2|\eta|^{1/\kappa}}|\hat{u}(\xi,\eta,t)|^{2}\,d\xi\,d\eta=C_{\sigma_1,\sigma_2,\kappa}^{2l+2n}\,\big((2l)!\big)^{\kappa}\big((2n)!\big)^{\kappa}\,\|u(\cdot,\cdot,t)\|^{2}_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}}.$$

    Since $\big((2l)!\big)^{\kappa}\le c_1^{2l}(l!)^{2\kappa}$ and $\big((2n)!\big)^{\kappa}\le c_2^{2n}(n!)^{2\kappa}$ for some $c_1,c_2>0$, we have, for all $l,n\in\{0,1,2,\dots\}$,

    $$|\partial_x^{l}\partial_y^{n}u|\lesssim\|\partial_x^{l}\partial_y^{n}u(\cdot,\cdot,t)\|_{H^{s_1,s_2}}\le C_0\,C_1^{l+n}\,(l!)^{\kappa}(n!)^{\kappa}\qquad\text{for }t\in[0,\delta],$$

    where $C_0=\|u(\cdot,\cdot,t)\|_{G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}}$, $C_1=c_0^{2}\,C_{\sigma_1,\sigma_2,\kappa}$ and $c_0=\max(c_1,c_2)$, which implies that the solution $u$ is of Gevrey class $G^{\kappa}$ in $x,y$ for all times near zero when $s_1,s_2\ge0$.
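    The factorial bound used above is elementary: since $(2l)!=\binom{2l}{l}(l!)^{2}$ and $\binom{2l}{l}\le\sum_{i=0}^{2l}\binom{2l}{i}=4^{l}$, we get

    $$\big((2l)!\big)^{\kappa}\le4^{l\kappa}(l!)^{2\kappa}=\big(2^{\kappa}\big)^{2l}(l!)^{2\kappa},$$

    so one may take $c_1=c_2=2^{\kappa}$.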

    Now, for $-\frac{\alpha-1}{4}<s_1<0$, $s_2\ge0$ and any $0<\epsilon<\sigma_1$, there exists a positive constant $C=C_{s_1,\epsilon}>0$ such that

    $$\int_{\mathbb{R}^{2}}e^{2(\sigma_1-\epsilon)|\xi|^{1/\kappa}}e^{2\sigma_2|\eta|^{1/\kappa}}\langle\eta\rangle^{2s_2}|\hat{u}(\xi,\eta,t)|^{2}\,d\xi\,d\eta\le C\int_{\mathbb{R}^{2}}e^{2\epsilon|\xi|^{1/\kappa}}\langle\xi\rangle^{2s_1}\,e^{2(\sigma_1-\epsilon)|\xi|^{1/\kappa}}e^{2\sigma_2|\eta|^{1/\kappa}}\langle\eta\rangle^{2s_2}|\hat{u}(\xi,\eta,t)|^{2}\,d\xi\,d\eta=C\int_{\mathbb{R}^{2}}\langle\xi\rangle^{2s_1}\langle\eta\rangle^{2s_2}e^{2\sigma_1|\xi|^{1/\kappa}}e^{2\sigma_2|\eta|^{1/\kappa}}|\hat{u}(\xi,\eta,t)|^{2}\,d\xi\,d\eta.$$

    This implies that if

    $$u\in C\big([0,T];G^{\sigma_1,\sigma_2,\kappa}_{s_1,s_2}\big)\quad\text{and}\quad s_1<0,\ s_2\ge0,$$

    then

    $$u\in C\big([0,T];G^{\sigma_1-\epsilon,\sigma_2,\kappa}_{0,s_2}\big),$$

    which allows us to conclude that $u$ is in $G^{\kappa}$ in $x,y$ for all $s_1>-\frac{\alpha-1}{4}$, $s_2\ge0$.

    In order to prove Theorem 2.3 it is enough to prove the following result.

    Lemma 5.2. For $j,l,n\in\{0,1,2,\dots\}$, the following inequality

    $$|\partial_t^{j}\partial_x^{l}\partial_y^{n}u|\le C^{j+l+n+1}\big((l+n+(\alpha+1)j)!\big)^{\kappa}L^{j},\qquad(5.2)$$

    holds, where $L=C^{\alpha}+\dfrac{1}{120^{\kappa}}+\dfrac{C}{40^{\kappa}}$, $x,y\in\mathbb{R}$, $t\in[0,\delta]$.

    In fact, taking l=n=0 we obtain

    $$|\partial_t^{j}u|\le C^{j+1}\big(((\alpha+1)j)!\big)^{\kappa}L^{j}\le K^{j+1}\,(j!)^{(\alpha+1)\kappa}.$$

    Proof. We use induction on $j$ to prove Lemma 5.2.

    For $j=0$ and $l,n\in\{0,1,2,\dots\}$, we have, by (5.1),

    $$|\partial_x^{l}\partial_y^{n}u(x,y,t)|\le C^{l+n+1}(l!)^{\kappa}(n!)^{\kappa}\le C^{l+n+1}\big((l+n)!\big)^{\kappa}.\qquad(5.3)$$

    For $j=1$ and $l,n\in\{0,1,2,\dots\}$, we have

    $$|\partial_t\partial_x^{l}\partial_y^{n}u|\le\big||D_x|^{\alpha}\partial_x^{l+1}\partial_y^{n}u\big|+\big|\partial_x^{l-1}\partial_y^{n+2}u\big|+\big|\partial_x^{l}\partial_y^{n}(u\,\partial_x u)\big|.\qquad(5.4)$$

    The terms of (5.4) can be estimated as

    $$\big||D_x|^{\alpha}\partial_x^{l+1}\partial_y^{n}u\big|\le C^{l+1+\alpha+n+1}\big((l+1+\alpha+n)!\big)^{\kappa}\le C^{l+n+1+1}\big((l+n+(\alpha+1)\cdot1)!\big)^{\kappa}\,C^{\alpha},\qquad(5.5)$$
    $$\big|\partial_x^{l-1}\partial_y^{n+2}u\big|\le C^{l-1+n+2+1}\big((l-1+n+2)!\big)^{\kappa}\le C^{l+n+1+1}\big((l+n+5\cdot1)!\big)^{\kappa}\,\frac{1}{(l+n+2)^{\kappa}(l+n+3)^{\kappa}(l+n+4)^{\kappa}(l+n+5)^{\kappa}}\le C^{l+n+1+1}\big((l+n+5\cdot1)!\big)^{\kappa}\,\frac{1}{120^{\kappa}}.\qquad(5.6)$$

    The nonlinear terms are treated as follows

    $$\big|\partial_x^{l}\partial_y^{n}(u\,\partial_x u)\big|=\Big|\sum_{p=0}^{l}\sum_{k=0}^{n}\binom{l}{p}\binom{n}{k}\big(\partial_x^{l-p}\partial_y^{n-k}u\big)\big(\partial_x^{p+1}\partial_y^{k}u\big)\Big|.$$

    Recalling that, for $l\ge p$ and $n\ge k$, we have the inequality

    $$\binom{l}{p}\binom{n}{k}\le\binom{l+n}{p+k}.\qquad(5.7)$$
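    Inequality (5.7) is a direct consequence of the Vandermonde identity: for $l\ge p$ and $n\ge k$,

    $$\binom{l}{p}\binom{n}{k}\le\sum_{i+j=p+k}\binom{l}{i}\binom{n}{j}=\binom{l+n}{p+k},$$

    since the left-hand side is one of the (nonnegative) terms of the sum.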

    By (5.7), we have

    $$\big|\partial_x^{l}\partial_y^{n}(u\,\partial_x u)\big|\le\Big|\sum_{p=0}^{l}\sum_{k=0}^{n}\binom{l+n}{p+k}\big(\partial_x^{l-p}\partial_y^{n-k}u\big)\big(\partial_x^{p+1}\partial_y^{k}u\big)\Big|\le\sum_{p=0}^{l}\sum_{k=0}^{n}\frac{\big((l+n)!\big)^{\kappa}}{\big((p+k)!\big)^{\kappa}\big((l+n-p-k)!\big)^{\kappa}}\,C^{l-p+n-k+1}\big((l+n-p-k)!\big)^{\kappa}\,C^{p+1+k+1}\big((p+1+k)!\big)^{\kappa}=C^{l+n+3}\big((l+n)!\big)^{\kappa}\sum_{p=0}^{l}\sum_{k=0}^{n}(p+1+k)^{\kappa}.$$

    At this stage, we use the fact that

    $$\sum_{p=0}^{l}\sum_{k=0}^{n}(p+1+k)=\frac{(l+1)(n+1)(l+n+2)}{2}.\qquad(5.8)$$
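    Identity (5.8) can be checked directly:

    $$\sum_{p=0}^{l}\sum_{k=0}^{n}(p+1+k)=\sum_{p=0}^{l}\Big[(n+1)(p+1)+\frac{n(n+1)}{2}\Big]=(n+1)\frac{(l+1)(l+2)}{2}+(l+1)\frac{n(n+1)}{2}=\frac{(l+1)(n+1)(l+n+2)}{2}.$$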

    Then,

    $$\big|\partial_x^{l}\partial_y^{n}(u\,\partial_x u)\big|\le C^{l+n+3}\big((l+n)!\big)^{\kappa}\,\frac{(l+1)^{\kappa}(n+1)^{\kappa}(l+n+2)^{\kappa}}{2^{\kappa}}\le C^{l+n+1+1}\big((l+n)!\big)^{\kappa}(l+n+1)^{\kappa}(l+n+2)^{\kappa}(l+n+3)^{\kappa}\,\frac{C}{2^{\kappa}}=C^{l+n+1+1}\big((l+n+(\alpha+1))!\big)^{\kappa}\,\frac{1}{\big[(l+n+4)\cdots(l+n+\alpha+1)\big]^{\kappa}}\,\frac{C}{2^{\kappa}}\le C^{l+n+1+1}\big((l+n+(\alpha+1)\cdot1)!\big)^{\kappa}\,\frac{C}{40^{\kappa}}.\qquad(5.9)$$

    From (5.5), (5.6) and (5.9), it follows that

    $$|\partial_t\partial_x^{l}\partial_y^{n}u|\le C^{l+n+1+1}\big((l+n+(\alpha+1)\cdot1)!\big)^{\kappa}\,L^{1},\qquad x,y\in\mathbb{R},\ t\in[0,\delta].$$

    We assume that (5.2) holds for all $j\le m-1$ and $l,n\in\{0,1,2,\dots\}$, and then prove it for $j+1=m$ and $l,n\in\{0,1,2,\dots\}$.

    We obtain

    $$|\partial_t^{j+1}\partial_x^{l}\partial_y^{n}u|\le\big|\partial_t^{j}|D_x|^{\alpha}\partial_x^{l+1}\partial_y^{n}u\big|+\big|\partial_t^{j}\partial_x^{l-1}\partial_y^{n+2}u\big|+\big|\partial_t^{j}\partial_x^{l}\partial_y^{n}(u\,\partial_x u)\big|.$$

    These terms are estimated as follows

    $$\big|\partial_t^{j}|D_x|^{\alpha}\partial_x^{l+1}\partial_y^{n}u\big|\le C^{j+l+(\alpha+1)+n+1}\big((l+n+(\alpha+1)(j+1))!\big)^{\kappa}L^{j}\le C^{(j+1)+l+n+1}\big((l+n+(\alpha+1)(j+1))!\big)^{\kappa}\,C^{\alpha}\,L^{j},\qquad(5.10)$$

    and

    $$\big|\partial_t^{j}\partial_x^{l-1}\partial_y^{n+2}u\big|\le C^{j+l-1+n+2+1}\big((l-1+n+2+(\alpha+1)j)!\big)^{\kappa}L^{j}\le C^{(j+1)+l+n+1}\big((l+n+(\alpha+1)(j+1))!\big)^{\kappa}\,\frac{L^{j}}{120^{\kappa}}.\qquad(5.11)$$

    The nonlinear terms are treated as follows

    $$\partial_t^{j}\partial_x^{l}\partial_y^{n}(u\,\partial_x u)=\sum_{p=0}^{l}\sum_{k=0}^{n}\binom{l}{p}\binom{n}{k}\big(\partial_t^{j}\partial_x^{l-p}\partial_y^{n-k}u\big)\big(\partial_x^{p+1}\partial_y^{k}u\big)+\sum_{p=0}^{l}\sum_{k=0}^{n}\binom{l}{p}\binom{n}{k}\big(\partial_x^{l-p}\partial_y^{n-k}u\big)\big(\partial_t^{j}\partial_x^{p+1}\partial_y^{k}u\big)+\sum_{q=1}^{j-1}\sum_{p=0}^{l}\sum_{k=0}^{n}\binom{j}{q}\binom{l}{p}\binom{n}{k}\big(\partial_t^{j-q}\partial_x^{l-p}\partial_y^{n-k}u\big)\big(\partial_t^{q}\partial_x^{p+1}\partial_y^{k}u\big).\qquad(5.12)$$

    Using (5.7) to estimate the first term $(5.12)_1$, we get

    $$\Big|\sum_{p=0}^{l}\sum_{k=0}^{n}\binom{l}{p}\binom{n}{k}\big(\partial_t^{j}\partial_x^{l-p}\partial_y^{n-k}u\big)\big(\partial_x^{p+1}\partial_y^{k}u\big)\Big|\le\frac13\,C^{(j+1)+l+n+1}\big((l+n+(\alpha+1)(j+1))!\big)^{\kappa}\,\frac{C}{40^{\kappa}}\,L^{j}.\qquad(5.13)$$

    We estimate $(5.12)_2$ as

    $$\Big|\sum_{p=0}^{l}\sum_{k=0}^{n}\binom{l}{p}\binom{n}{k}\big(\partial_x^{l-p}\partial_y^{n-k}u\big)\big(\partial_t^{j}\partial_x^{p+1}\partial_y^{k}u\big)\Big|\le\frac13\,C^{(j+1)+l+n+1}\big((l+n+(\alpha+1)(j+1))!\big)^{\kappa}\,\frac{C}{40^{\kappa}}\,L^{j}.\qquad(5.14)$$

    To estimate $(5.12)_3$, we recall that for $j\ge q$, $l\ge p$ and $n\ge k$, we have the inequality

    $$\binom{j}{q}\binom{l}{p}\binom{n}{k}\le\binom{j+l+n}{q+p+k}.$$

    Then

    $$\Big|\sum_{q=1}^{j-1}\sum_{p=0}^{l}\sum_{k=0}^{n}\binom{j}{q}\binom{l}{p}\binom{n}{k}\big(\partial_t^{j-q}\partial_x^{l-p}\partial_y^{n-k}u\big)\big(\partial_t^{q}\partial_x^{p+1}\partial_y^{k}u\big)\Big|\le\sum_{q=1}^{j-1}\sum_{p=0}^{l}\sum_{k=0}^{n}\binom{j+l+n}{q+p+k}\,C^{j-q+l-p+n-k+1}\big((l-p+n-k+(\alpha+1)(j-q))!\big)^{\kappa}L^{j-q}\;C^{q+p+1+k+1}\big((p+1+k+(\alpha+1)q)!\big)^{\kappa}L^{q}\le\frac13\,C^{(j+1)+l+n+1}\big((l+n+(\alpha+1)(j+1))!\big)^{\kappa}\,\frac{C}{40^{\kappa}}\,L^{j}.\qquad(5.15)$$

    Finally, by using (5.13)-(5.15), we arrive at

    $$|\partial_t^{j+1}\partial_x^{l}\partial_y^{n}u|\le C^{(j+1)+l+n+1}\big((l+n+(\alpha+1)(j+1))!\big)^{\kappa}L^{j+1},$$

    for all $(x,y)\in\mathbb{R}^{2}$, $t\in[0,\delta]$.

    The detailed proof of (5.12) for κ=1 is given in [6].

    We have discussed the local well-posedness of a generalized Kadomtsev-Petviashvili I equation in an anisotropic Gevrey space. We proved the existence of solutions using the Banach contraction mapping principle, by means of bilinear estimates in anisotropic Gevrey-Bourgain spaces. We then used this local result together with a Gevrey approximate conservation law to prove that global solutions exist. These solutions are of Gevrey class of order $(\alpha+1)\kappa$ in the time variable. The results of the present paper are new and contribute significantly to the existing literature on the topic.

    The authors wish to deeply thank the anonymous referee for their useful remarks and careful reading of the proofs presented in this paper.

    The authors declare that they have no conflict of interest.



  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)