Research article

A selective evolutionary heterogeneous ensemble algorithm for classifying imbalanced data

  • Learning from imbalanced data is a challenging task, as most conventional supervised learning algorithms tend to favor the majority class, which has significantly more instances than the other classes. Ensemble learning is a robust solution for addressing the imbalanced classification problem. To construct a successful ensemble classifier, the diversity of the base classifiers should receive specific attention. In this paper, we present a novel ensemble learning algorithm called Selective Evolutionary Heterogeneous Ensemble (SEHE), which produces diversity in two ways: 1) adopting multiple different sampling strategies to generate diverse training subsets and 2) training multiple heterogeneous base classifiers to construct an ensemble. In addition, considering that some low-quality base classifiers may pull down the performance of an ensemble and that it is difficult to estimate the potential of each base classifier directly, we draw on the idea of a selective ensemble to adaptively select base classifiers for constructing an ensemble. In particular, an evolutionary algorithm is adopted to conduct this adaptive selection in SEHE. The experimental results on 42 imbalanced data sets show that SEHE is significantly superior to some state-of-the-art ensemble learning algorithms that are specifically designed for addressing the class imbalance problem, indicating its effectiveness and superiority.

    Citation: Xiaomeng An, Sen Xu. A selective evolutionary heterogeneous ensemble algorithm for classifying imbalanced data[J]. Electronic Research Archive, 2023, 31(5): 2733-2757. doi: 10.3934/era.2023138




    Non-linear partial differential equations are extensively used in science and engineering to model real-world phenomena [1,2,3,4]. Using fractional operators with local and singular kernels, such as the Riemann-Liouville (RL) and Caputo operators, it is difficult to express many non-local dynamical systems. Thus, to describe complex physical problems, fractional operators with non-local and non-singular kernels [5,6] were defined. The Atangana-Baleanu (AB) fractional derivative operator is one such operator, introduced by Atangana and Baleanu [7].

    The time fractional Kolmogorov equations (TF-KEs) are defined as

    $$ {}^{AB}D^{\gamma}_{t}\, g(s,t) = \vartheta_{1}(s)\, D_{s} g(s,t) + \vartheta_{2}(s)\, D_{ss} g(s,t) + \omega(s,t), \qquad 0<\gamma\le 1, \qquad (1.1) $$

    with the initial and boundary conditions

    $$ g(s,0)=d_{0}(s), \qquad g(0,t)=d_{1}(t), \qquad g(1,t)=d_{2}(t), $$

    where $(s,t)\in[0,1]\times[0,1]$, ${}^{AB}D^{\gamma}_{t}$ denotes the Atangana-Baleanu (AB) derivative operator, $D_{s} g(s,t)=\frac{\partial}{\partial s}g(s,t)$ and $D_{ss} g(s,t)=\frac{\partial^{2}}{\partial s^{2}}g(s,t)$. If $\vartheta_{1}(s)$ and $\vartheta_{2}(s)$ are constants, then Eq (1.1) represents the time fractional advection-diffusion equations (TF-ADEs).

    Many researchers are developing methods to find the solutions of fractional-order partial differential equations. Analytical or closed-form solutions of such equations are difficult to obtain; therefore, numerical simulation of these equations attracts a great deal of attention. High-accuracy methods can describe the anomalous diffusion phenomenon more precisely. Some of the efficient techniques are Adomian decomposition [8,9], a two-grid temporal second-order scheme [10], the Galerkin finite element method [11], finite differences [12], the differential transform [13], the orthogonal spline collocation method [14], the optimal homotopy asymptotic method [15], operational matrices (OMs) [16,17,18,19,20,21,22,23,24], etc.

    The OM approach is one of the numerical tools for finding the solution of a variety of differential equations. OMs of fractional derivatives and integration have been derived using polynomials like the Chebyshev [16], Legendre [17,18], Bernstein [19], clique [20], Genocchi [21] and Bernoulli [22] polynomials, etc. In this work, with the help of the Hosoya polynomial (HP) of simple paths and OMs, we reduce problem (1.1) to the solution of a system of nonlinear algebraic equations, which greatly simplifies the problem under study.

    The sections are arranged as follows. In Section 2, we review some basic preliminaries of fractional calculus and interesting properties of the HP. Section 3 presents a new technique to solve the TF-KEs. An error bound for the numerical solution is derived in Section 4. The efficiency and simplicity of the proposed method are illustrated through examples in Section 5. In Section 6, the conclusion is given.

    In this section we discuss some basic preliminaries of fractional calculus and the main properties of the HP. We also compute an error bound for the numerical solution.

    Definition 2.1. (See [25]) Let $0<\gamma\le 1$. The RL integral of order $\gamma$ is defined as

    $$ {}^{RL}I^{\gamma}_{s}\, g(s) = \frac{1}{\Gamma(\gamma)}\int_{0}^{s}(s-\xi)^{\gamma-1} g(\xi)\, d\xi. $$

    One of the properties of the fractional-order RL integral is

    $$ {}^{RL}I^{\gamma}_{s}\, s^{\upsilon} = \frac{\Gamma(\upsilon+1)}{\Gamma(\upsilon+1+\gamma)}\, s^{\upsilon+\gamma}, \qquad \upsilon\ge 0. $$
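    For instance, applying this property with $\upsilon=1$ gives

    $$ {}^{RL}I^{\gamma}_{s}\, s = \frac{\Gamma(2)}{\Gamma(2+\gamma)}\, s^{1+\gamma} = \frac{s^{1+\gamma}}{\Gamma(2+\gamma)}, $$

    which reduces to the classical antiderivative $\frac{s^{2}}{2}$ when $\gamma=1$, since $\Gamma(3)=2$.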

    Definition 2.2. (See [7]) Let $0<\gamma\le 1$, $g\in H^{1}(0,1)$ and $\Phi(\gamma)$ be a normalization function such that $\Phi(0)=\Phi(1)=1$ and $\Phi(\gamma)=1-\gamma+\frac{\gamma}{\Gamma(\gamma)}$. Then, the following holds:

    1) The AB derivative is defined as

    $$ {}^{AB}D^{\gamma}_{s}\, g(s) = \begin{cases} \dfrac{\Phi(\gamma)}{1-\gamma}\displaystyle\int_{0}^{s} E_{\gamma}\!\left(-\dfrac{\gamma}{1-\gamma}\,(s-\xi)^{\gamma}\right) g'(\xi)\, d\xi, & 0<\gamma<1, \\[2mm] g'(s), & \gamma=1, \end{cases} $$

    where $E_{\gamma}(s)=\sum_{j=0}^{\infty}\frac{s^{j}}{\Gamma(\gamma j+1)}$ is the Mittag-Leffler function.

    2) The AB integral is given as

    $$ {}^{AB}I^{\gamma}_{s}\, g(s) = \frac{1-\gamma}{\Phi(\gamma)}\, g(s) + \frac{\gamma}{\Phi(\gamma)\Gamma(\gamma)}\int_{0}^{s}(s-\xi)^{\gamma-1} g(\xi)\, d\xi. \qquad (2.1) $$

    Let $v_{\gamma}=\frac{1-\gamma}{\Phi(\gamma)}$ and $w_{\gamma}=\frac{1}{\Phi(\gamma)\Gamma(\gamma)}$; then, we can rewrite (2.1) as

    $$ {}^{AB}I^{\gamma}_{s}\, g(s) = v_{\gamma}\, g(s) + w_{\gamma}\, \Gamma(\gamma+1)\, {}^{RL}I^{\gamma}_{s}\, g(s). $$

    The AB integral satisfies the following property [26]:

    $$ {}^{AB}I^{\gamma}_{s}\left({}^{AB}D^{\gamma}_{s}\, g(s)\right) = g(s) - g(0). $$
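    The Mittag-Leffler function that appears in the AB operators above is easy to evaluate by direct truncation of its series. The paper's computations are reported to have been done in Mathematica; the following Python sketch (the function name and truncation length are our own choices, not the authors') is only meant to illustrate the definition.

```python
import math

def mittag_leffler(s: float, gamma: float, terms: int = 100) -> float:
    """Truncated series E_gamma(s) = sum_{j=0}^{terms-1} s^j / Gamma(gamma*j + 1)."""
    return sum(s**j / math.gamma(gamma * j + 1) for j in range(terms))

# Sanity checks: E_1(s) = exp(s) and E_gamma(0) = 1 for any gamma.
print(mittag_leffler(1.0, 1.0))   # ~ 2.718281828
print(mittag_leffler(0.0, 0.7))   # 1.0
```

    For arguments of moderate size, 100 terms are more than enough; for large arguments a dedicated evaluation algorithm would be preferable.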

    In 1988, Haruo Hosoya introduced the concept of the HP [27,28]. This polynomial encodes the number of pairs of vertices of a graph at each distance [29]. In [30,31], the HP of path graphs was obtained. The HP of a graph $G$ is described as

    $$ \tilde{H}(G,s) = \sum_{l\ge 0} d(G,l)\, s^{l}, $$

    where $d(G,l)$ denotes the number of pairs of vertices of $G$ at distance $l$ [32,33]. Here we consider the path graph with $n$ vertices, where $n\in\mathbb{N}$; the Hosoya polynomials are calculated from these $n$ vertices [34]. Let us consider the path $P_{n}$ with $n$ vertices; then the HPs of $P_{i}$, $i=1,2,\ldots,n$, are computed as

    $$ \begin{aligned} \tilde{H}(P_{1},s) &= \sum_{l\ge 0} d(P_{1},l)\, s^{l} = 1, \\ \tilde{H}(P_{2},s) &= \sum_{l=0}^{1} d(P_{2},l)\, s^{l} = s+2, \\ \tilde{H}(P_{3},s) &= \sum_{l=0}^{2} d(P_{3},l)\, s^{l} = s^{2}+2s+3, \\ &\;\;\vdots \\ \tilde{H}(P_{n},s) &= n + (n-1)s + (n-2)s^{2} + \cdots + (n-(n-2))s^{n-2} + (n-(n-1))s^{n-1}. \end{aligned} $$
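    The closed form above says that the coefficient of $s^{l}$ in $\tilde{H}(P_{n},s)$ is simply $n-l$, the number of vertex pairs of $P_{n}$ at distance $l$. A small Python sketch, with helper names of our own choosing, reproduces this:

```python
import numpy as np

def hosoya_path_coeffs(n: int) -> np.ndarray:
    """Coefficients [d(P_n,0), d(P_n,1), ..., d(P_n,n-1)]: the coefficient of s^l is n - l."""
    return np.array([n - l for l in range(n)], dtype=float)

def hosoya_path_eval(n: int, s: float) -> float:
    """Evaluate H(P_n, s) = sum_l (n - l) s^l."""
    return float(np.polyval(hosoya_path_coeffs(n)[::-1], s))

# H(P_3, s) = s^2 + 2 s + 3, so H(P_3, 2) = 4 + 4 + 3 = 11.
print(hosoya_path_coeffs(3))      # [3. 2. 1.]
print(hosoya_path_eval(3, 2.0))   # 11.0
```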

    Consider any function $g(s)$ in $L^{2}(0,1)$; we can approximate it using the HP as follows:

    $$ g(s) \simeq \tilde{g}(s) = \sum_{i=1}^{N+1} h_{i}\,\tilde{H}(P_{i},s) = h^{T}H(s), \qquad (2.2) $$

    where

    $$ h = [h_{1}, h_{2}, \ldots, h_{N+1}]^{T}, $$

    and

    $$ H(s) = [\tilde{H}(P_{1},s), \tilde{H}(P_{2},s), \ldots, \tilde{H}(P_{N+1},s)]^{T}. \qquad (2.3) $$

    From (2.2), we have

    $$ h = Q^{-1}\langle g(s), H(s)\rangle, $$

    where $Q = \langle H(s), H(s)\rangle$ and $\langle\cdot\,,\cdot\rangle$ denotes the inner product of two arbitrary functions.
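    A minimal numerical sketch of this projection, assuming Gauss-Legendre quadrature on $[0,1]$ and helper names of our own (not from the paper), computes $Q=\langle H,H\rangle$ and $h=Q^{-1}\langle g,H\rangle$:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def hp_basis(N: int, s: np.ndarray) -> np.ndarray:
    """Row m-1 holds H(P_m, s) = sum_{j=0}^{m-1} (m - j) s^j, for m = 1, ..., N+1."""
    return np.array([sum((m - j) * s**j for j in range(m)) for m in range(1, N + 2)])

def hp_approx_coeffs(g, N: int, nq: int = 50) -> np.ndarray:
    """Coefficients h = Q^{-1} <g, H> of the HP approximation of g on [0, 1]."""
    x, w = leggauss(nq)                  # Gauss-Legendre nodes/weights on [-1, 1]
    x, w = 0.5 * (x + 1.0), 0.5 * w      # map to [0, 1]
    H = hp_basis(N, x)                   # (N+1, nq) matrix of basis values
    Q = (H * w) @ H.T                    # Gram matrix <H, H>
    return np.linalg.solve(Q, (H * w) @ g(x))

# g(s) = s^2 lies in the span of {H(P_1), H(P_2), H(P_3)}, so N = 2 recovers it exactly.
h = hp_approx_coeffs(lambda s: s**2, N=2)
x = np.linspace(0.0, 1.0, 5)
print(np.max(np.abs(hp_basis(2, x).T @ h - x**2)))   # ~ 1e-15
```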

    Now, consider the function $g(s,t)\in L^{2}([0,1]\times[0,1])$; then, it can be expanded in terms of the HP by using the infinite series

    $$ g(s,t) = \sum_{i=1}^{\infty}\sum_{j=1}^{\infty} h_{ij}\,\tilde{H}(P_{i},s)\,\tilde{H}(P_{j},t). \qquad (2.4) $$

    If we consider the first $(N+1)^{2}$ terms in (2.4), an approximation of the function $g(s,t)$ is obtained as

    $$ g(s,t) \simeq \sum_{i=1}^{N+1}\sum_{j=1}^{N+1} h_{ij}\,\tilde{H}(P_{i},s)\,\tilde{H}(P_{j},t) = H^{T}(s)\,\tilde{h}\,H(t), \qquad (2.5) $$

    where

    $$ \tilde{h} = Q^{-1}\left\langle H(s), \left\langle g(s,t), H(t)\right\rangle\right\rangle Q^{-1}. $$

    Theorem 2.1. The integral of the vector H(s) given by (2.3) can be approximated as

    $$ \int_{0}^{s} H(\xi)\, d\xi \simeq R\, H(s), \qquad (2.6) $$

    where R is called the OM of integration for the HP.

    Proof. Firstly, we express the basis vector of the HP, H(s), in terms of the Taylor basis functions,

    $$ H(s) = A\,\hat{S}(s), \qquad (2.7) $$

    where

    $$ \hat{S}(s) = [1, s, \ldots, s^{N}]^{T}, $$

    and

    $$ A = [a_{q,r}], \qquad q,r = 1,2,\ldots,N+1, $$

    with

    $$ a_{q,r} = \begin{cases} q-(r-1), & q\ge r, \\ 0, & q<r. \end{cases} $$

    Now, we can write

    $$ \int_{0}^{s} H(\xi)\, d\xi = A\int_{0}^{s}\hat{S}(\xi)\, d\xi = A\,B\,S(s), $$

    where $B=[b_{q,r}]$, $q,r=1,2,\ldots,N+1$, is an $(N+1)\times(N+1)$ matrix with the following elements:

    $$ b_{q,r} = \begin{cases} \frac{1}{q}, & q=r, \\ 0, & q\ne r, \end{cases} $$

    and

    $$ S(s) = [s, s^{2}, \ldots, s^{N+1}]^{T}. $$

    Now, by approximating $s^{k}$, $k=1,2,\ldots,N+1$, in terms of the HP and by (2.7), we have

    $$ \begin{cases} s^{k} = A^{-1}_{k+1} H(s), & k=1,2,\ldots,N, \\ s^{N+1} \simeq L^{T} H(s), \end{cases} $$

    where $A^{-1}_{r}$, $r=2,3,\ldots,N+1$, is the $r$-th row of the matrix $A^{-1}$ and $L=Q^{-1}\langle s^{N+1}, H(s)\rangle$. Then, we get

    $$ S(s) \simeq E\, H(s), $$

    where $E=[A^{-1}_{2}, A^{-1}_{3}, \ldots, A^{-1}_{N+1}, L^{T}]^{T}$. Therefore, by taking $R=A\,B\,E$, the proof is completed.
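    The proof is constructive, so $R=ABE$ can be assembled directly. The sketch below uses our own helper names, redefines the small hp_basis helper from the earlier sketch so the snippet stays self-contained, and computes the vector $L$ by an $L^{2}(0,1)$ projection as in the proof; it then checks (2.6) on $\tilde{H}(P_{2},s)$.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def hp_basis(N, s):
    return np.array([sum((m - j) * s**j for j in range(m)) for m in range(1, N + 2)])

def integration_om(N: int, nq: int = 60) -> np.ndarray:
    """Operational matrix R with  int_0^s H(xi) d(xi) ~= R H(s)  (Theorem 2.1)."""
    # A: H(s) = A [1, s, ..., s^N]^T, lower triangular with a_{q,r} = q - (r - 1) for q >= r.
    A = np.array([[q - (r - 1) if q >= r else 0.0
                   for r in range(1, N + 2)] for q in range(1, N + 2)])
    # B: int_0^s [1, xi, ..., xi^N]^T d(xi) = B [s, s^2, ..., s^{N+1}]^T, diagonal entries 1/q.
    B = np.diag([1.0 / q for q in range(1, N + 2)])
    # E: [s, ..., s^{N+1}]^T ~= E H(s); rows 1..N come from A^{-1}, the last row is L.
    Ainv = np.linalg.inv(A)
    x, w = leggauss(nq)
    x, w = 0.5 * (x + 1.0), 0.5 * w
    H = hp_basis(N, x)
    Q = (H * w) @ H.T
    L = np.linalg.solve(Q, (H * w) @ x**(N + 1))
    E = np.vstack([Ainv[1:], L])
    return A @ B @ E

# Check: int_0^s H(P_2, xi) d(xi) = s^2/2 + 2 s, a degree-2 polynomial, is reproduced exactly.
N = 2
R = integration_om(N)
x = np.linspace(0.0, 1.0, 5)
print(np.max(np.abs((R @ hp_basis(N, x))[1] - (x**2 / 2 + 2 * x))))   # ~ 1e-15
```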

    Theorem 2.2. For the HP vector $H(s)$ given by (2.3) and an arbitrary vector $C$, the product can be approximated as

    $$ C^{T} H(s)\, H^{T}(s) \simeq H^{T}(s)\,\hat{C}, $$

    where $\hat{C}$ is called the OM of product for the HP.

    Proof. Multiplying the vector $C=[c_{1},c_{2},\ldots,c_{N+1}]^{T}$ by $H(s)$ and $H^{T}(s)$ gives

    $$ C^{T} H(s)\, H^{T}(s) = C^{T} H(s)\left(\hat{S}^{T}(s) A^{T}\right) = \left[ C^{T}H(s),\; s\left(C^{T}H(s)\right),\; \ldots,\; s^{N}\left(C^{T}H(s)\right) \right] A^{T} = \left[ \sum_{i=1}^{N+1} c_{i}\tilde{H}(P_{i},s),\; \sum_{i=1}^{N+1} c_{i}\, s\,\tilde{H}(P_{i},s),\; \ldots,\; \sum_{i=1}^{N+1} c_{i}\, s^{N}\tilde{H}(P_{i},s) \right] A^{T}. \qquad (2.8) $$

    Taking $e_{k,i}=[e^{1}_{k,i}, e^{2}_{k,i}, \ldots, e^{N+1}_{k,i}]^{T}$ and expanding $s^{k-1}\tilde{H}(P_{i},s)\simeq e^{T}_{k,i}H(s)$, $i,k=1,2,\ldots,N+1$, using the HP, we can write

    $$ e_{k,i} = Q^{-1}\int_{0}^{1} s^{k-1}\tilde{H}(P_{i},s)\, H(s)\, ds = Q^{-1}\left[ \int_{0}^{1} s^{k-1}\tilde{H}(P_{i},s)\tilde{H}(P_{1},s)\, ds,\; \int_{0}^{1} s^{k-1}\tilde{H}(P_{i},s)\tilde{H}(P_{2},s)\, ds,\; \ldots,\; \int_{0}^{1} s^{k-1}\tilde{H}(P_{i},s)\tilde{H}(P_{N+1},s)\, ds \right]^{T}. $$

    Therefore,

    $$ \sum_{i=1}^{N+1} c_{i}\, s^{k-1}\tilde{H}(P_{i},s) \simeq \sum_{i=1}^{N+1} c_{i}\left( \sum_{j=1}^{N+1} e^{j}_{k,i}\tilde{H}(P_{j},s) \right) = \sum_{j=1}^{N+1}\tilde{H}(P_{j},s)\left( \sum_{i=1}^{N+1} c_{i} e^{j}_{k,i} \right) = H^{T}(s)\left[ \sum_{i=1}^{N+1} c_{i} e^{1}_{k,i},\; \sum_{i=1}^{N+1} c_{i} e^{2}_{k,i},\; \ldots,\; \sum_{i=1}^{N+1} c_{i} e^{N+1}_{k,i} \right]^{T} = H^{T}(s)\,[e_{k,1}, e_{k,2}, \ldots, e_{k,N+1}]\, C = H^{T}(s)\, E_{k}\, C, \qquad (2.9) $$

    where $E_{k}$ is an $(N+1)\times(N+1)$ matrix whose columns are the vectors $e_{k,i}$, $i=1,2,\ldots,N+1$. Let $\bar{E}_{k}=E_{k}C$, $k=1,2,\ldots,N+1$. Setting $\bar{C}=[\bar{E}_{1},\bar{E}_{2},\ldots,\bar{E}_{N+1}]$ as an $(N+1)\times(N+1)$ matrix and using (2.8) and (2.9), we have

    $$ C^{T} H(s)\, H^{T}(s) = \left[ \sum_{i=1}^{N+1} c_{i}\tilde{H}(P_{i},s),\; \sum_{i=1}^{N+1} c_{i}\, s\,\tilde{H}(P_{i},s),\; \ldots,\; \sum_{i=1}^{N+1} c_{i}\, s^{N}\tilde{H}(P_{i},s) \right] A^{T} \simeq H^{T}(s)\,\hat{C}, $$

    where, by taking $\hat{C}=\bar{C}A^{T}$, the proof is completed.

    Theorem 2.3. Consider the given vector H(s) in (2.3); the fractional RL integral of this vector is approximated as

    $$ {}^{RL}I^{\gamma}_{s} H(s) \simeq P^{\gamma} H(s), $$

    where $P^{\gamma}$ is named the OM of the RL integral based on the HP, which is given by

    $$ P^{\gamma} = \begin{bmatrix} \sigma_{1,1,1} & \sigma_{1,2,1} & \cdots & \sigma_{1,N+1,1} \\ \sum_{k=1}^{2}\sigma_{2,1,k} & \sum_{k=1}^{2}\sigma_{2,2,k} & \cdots & \sum_{k=1}^{2}\sigma_{2,N+1,k} \\ \vdots & \vdots & \ddots & \vdots \\ \sum_{k=1}^{N+1}\sigma_{N+1,1,k} & \sum_{k=1}^{N+1}\sigma_{N+1,2,k} & \cdots & \sum_{k=1}^{N+1}\sigma_{N+1,N+1,k} \end{bmatrix}, $$

    with

    $$ \sigma_{i,j,k} = (i-(k-1))\,\frac{\Gamma(k)\, e_{k,j}}{\Gamma(k+\gamma)}. $$

    Proof. First, we rewrite $\tilde{H}(P_{i},s)$ in the following form:

    $$ \tilde{H}(P_{i},s) = \sum_{k=1}^{i} (i-(k-1))\, s^{k-1}. $$

    Let us apply the RL integral operator ${}^{RL}I^{\gamma}_{s}$ to $\tilde{H}(P_{i},s)$, $i=1,\ldots,N+1$; this yields

    $$ {}^{RL}I^{\gamma}_{s}\tilde{H}(P_{i},s) = {}^{RL}I^{\gamma}_{s}\left( \sum_{k=1}^{i} (i-(k-1)) s^{k-1} \right) = \sum_{k=1}^{i} (i-(k-1))\left( {}^{RL}I^{\gamma}_{s}\, s^{k-1} \right) = \sum_{k=1}^{i} (i-(k-1))\,\frac{\Gamma(k)}{\Gamma(k+\gamma)}\, s^{k+\gamma-1}. \qquad (2.10) $$

    Now, using the HP, the function $s^{k+\gamma-1}$ is approximated as

    $$ s^{k+\gamma-1} \simeq \sum_{j=1}^{N+1} e_{k,j}\,\tilde{H}(P_{j},s). \qquad (2.11) $$

    By substituting (2.11) into (2.10), we have

    $$ {}^{RL}I^{\gamma}_{s}\tilde{H}(P_{i},s) \simeq \sum_{k=1}^{i} (i-(k-1))\,\frac{\Gamma(k)}{\Gamma(k+\gamma)}\left( \sum_{j=1}^{N+1} e_{k,j}\tilde{H}(P_{j},s) \right) = \sum_{j=1}^{N+1}\left( \sum_{k=1}^{i} (i-(k-1))\,\frac{\Gamma(k)\, e_{k,j}}{\Gamma(k+\gamma)} \right)\tilde{H}(P_{j},s) = \sum_{j=1}^{N+1}\left( \sum_{k=1}^{i}\sigma_{i,j,k} \right)\tilde{H}(P_{j},s). $$
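    Theorem 2.3 is likewise constructive: each entry of $P^{\gamma}$ is a finite sum of the $\sigma_{i,j,k}$, and the coefficients $e_{k,j}$ of (2.11) can be obtained by the same $L^{2}(0,1)$ projection used above. A sketch under those assumptions (helper names are our own):

```python
import math
import numpy as np
from numpy.polynomial.legendre import leggauss

def hp_basis(N, s):
    return np.array([sum((m - j) * s**j for j in range(m)) for m in range(1, N + 2)])

def rl_om(N: int, gamma: float, nq: int = 80) -> np.ndarray:
    """Operational matrix P^gamma with  RL_I^gamma H(s) ~= P^gamma H(s)  (Theorem 2.3)."""
    x, w = leggauss(nq)
    x, w = 0.5 * (x + 1.0), 0.5 * w
    H = hp_basis(N, x)
    Q = (H * w) @ H.T
    # e[k-1] holds the HP coefficients of s^{k + gamma - 1}, as in (2.11), k = 1, ..., N+1.
    e = [np.linalg.solve(Q, (H * w) @ x**(k + gamma - 1)) for k in range(1, N + 2)]
    P = np.zeros((N + 1, N + 1))
    for i in range(1, N + 2):
        for k in range(1, i + 1):
            # sigma_{i,j,k} = (i - (k - 1)) * Gamma(k) * e_{k,j} / Gamma(k + gamma)
            P[i - 1] += (i - (k - 1)) * math.gamma(k) / math.gamma(k + gamma) * e[k - 1]
    return P

# Check on H(P_1, s) = 1:  RL_I^gamma 1 = s^gamma / Gamma(gamma + 1).
N, gamma = 3, 0.8
P = rl_om(N, gamma)
x = np.linspace(0.1, 0.9, 5)
print(np.max(np.abs((P @ hp_basis(N, x))[0] - x**gamma / math.gamma(gamma + 1))))
# small: only the error of projecting s^gamma onto degree-N polynomials remains
```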

    Theorem 2.4. Suppose that $0<\gamma\le 1$ and $H(s)$ is the HP vector given by (2.3); then,

    $$ {}^{AB}I^{\gamma}_{s} H(s) \simeq I^{\gamma} H(s), $$

    where $I^{\gamma} = v_{\gamma} I + w_{\gamma}\,\Gamma(\gamma+1)\, P^{\gamma}$ is called the OM of the AB integral based on the HP and $I$ is the $(N+1)\times(N+1)$ identity matrix.

    Proof. Applying the AB integral operator ${}^{AB}I^{\gamma}_{s}$ to $H(s)$ yields

    $$ {}^{AB}I^{\gamma}_{s} H(s) = v_{\gamma} H(s) + w_{\gamma}\,\Gamma(\gamma+1)\, {}^{RL}I^{\gamma}_{s} H(s). $$

    According to Theorem 2.3, we have ${}^{RL}I^{\gamma}_{s} H(s) \simeq P^{\gamma} H(s)$. Therefore,

    $$ {}^{AB}I^{\gamma}_{s} H(s) \simeq v_{\gamma} H(s) + w_{\gamma}\,\Gamma(\gamma+1)\, P^{\gamma} H(s) = \left( v_{\gamma} I + w_{\gamma}\,\Gamma(\gamma+1)\, P^{\gamma} \right) H(s). $$

    Setting $I^{\gamma} = v_{\gamma} I + w_{\gamma}\,\Gamma(\gamma+1)\, P^{\gamma}$, the proof is complete.
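    Since $I^{\gamma}$ is just an affine combination of the identity and $P^{\gamma}$, it can be formed in one line once $P^{\gamma}$ is available. The sketch below assumes the rl_om helper from the previous sketch is in scope and uses the normalization $\Phi(\gamma)=1-\gamma+\gamma/\Gamma(\gamma)$ from Definition 2.2.

```python
import math
import numpy as np

def ab_om(N: int, gamma: float, nq: int = 80) -> np.ndarray:
    """OM of the AB integral: I^gamma = v_gamma I + w_gamma Gamma(gamma+1) P^gamma."""
    phi = 1.0 - gamma + gamma / math.gamma(gamma)   # normalization Phi(gamma)
    v = (1.0 - gamma) / phi
    w = 1.0 / (phi * math.gamma(gamma))
    return v * np.eye(N + 1) + w * math.gamma(gamma + 1) * rl_om(N, gamma, nq)

# Example usage: I_gamma = ab_om(N=3, gamma=0.8)
```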

    The main aim of this section is to introduce a technique based on the HP of simple paths to find the solution of the TF-KEs. To do this, we first expand $D_{ss}\, g(s,t)$ as

    $$ D_{ss}\, g(s,t) \simeq \sum_{i=1}^{N+1}\sum_{j=1}^{N+1} h_{ij}\,\tilde{H}(P_{i},s)\,\tilde{H}(P_{j},t) = H^{T}(s)\,\tilde{h}\, H(t). \qquad (3.1) $$

    Integrating (3.1) with respect to s gives

    $$ D_{s} g(s,t) \simeq D_{s} g(0,t) + H^{T}(s)\, R^{T}\tilde{h}\, H(t). \qquad (3.2) $$

    Again integrating the above equation with respect to s gives

    $$ g(s,t) \simeq d_{1}(t) + s\, D_{s} g(0,t) + H^{T}(s)\,(R^{2})^{T}\tilde{h}\, H(t). \qquad (3.3) $$

    By putting s=1 into (3.3), we have

    $$ D_{s} g(0,t) = d_{2}(t) - d_{1}(t) - H^{T}(1)\,(R^{2})^{T}\tilde{h}\, H(t). \qquad (3.4) $$

    By substituting (3.4) into (3.3), we get

    $$ g(s,t) \simeq d_{1}(t) + s\left( d_{2}(t) - d_{1}(t) - H^{T}(1)(R^{2})^{T}\tilde{h}\, H(t) \right) + H^{T}(s)(R^{2})^{T}\tilde{h}\, H(t). \qquad (3.5) $$

    Now, we approximate $d_{1}(t)=S_{0}^{T}H(t)$, $d_{2}(t)=S_{1}^{T}H(t)$ and $s=H^{T}(s)S$; putting these into (3.5), we get

    $$ g(s,t) \simeq S_{0}^{T}H(t) + H^{T}(s)S\left( S_{1}^{T}H(t) - S_{0}^{T}H(t) - H^{T}(1)(R^{2})^{T}\tilde{h}\, H(t) \right) + H^{T}(s)(R^{2})^{T}\tilde{h}\, H(t). $$

    The above relation can be written as

    $$ g(s,t) \simeq 1\times S_{0}^{T}H(t) + H^{T}(s)S\left( S_{1}^{T}H(t) - S_{0}^{T}H(t) - H^{T}(1)(R^{2})^{T}\tilde{h}\, H(t) \right) + H^{T}(s)(R^{2})^{T}\tilde{h}\, H(t). $$

    Approximating $1=\hat{S}^{T}H(s)=H^{T}(s)\hat{S}$, the above relation is rewritten as

    $$ g(s,t) \simeq H^{T}(s)\hat{S}S_{0}^{T}H(t) + H^{T}(s)S\left( S_{1}^{T}H(t) - S_{0}^{T}H(t) - H^{T}(1)(R^{2})^{T}\tilde{h}\, H(t) \right) + H^{T}(s)(R^{2})^{T}\tilde{h}\, H(t) = H^{T}(s)\left( \hat{S}S_{0}^{T} + SS_{1}^{T} - SS_{0}^{T} - SH^{T}(1)(R^{2})^{T}\tilde{h} + (R^{2})^{T}\tilde{h} \right) H(t). \qquad (3.6) $$

    Setting $\rho_{1} = \hat{S}S_{0}^{T} + SS_{1}^{T} - SS_{0}^{T} - SH^{T}(1)(R^{2})^{T}\tilde{h} + (R^{2})^{T}\tilde{h}$, we have

    $$ g(s,t) \simeq H^{T}(s)\,\rho_{1}\, H(t). \qquad (3.7) $$

    According to (1.1), we need to obtain $D_{s}\, g(s,t)$. Putting the approximations of $d_{1}(t)$, $d_{2}(t)$ and the relation (3.4) into (3.2) yields

    $$ D_{s} g(s,t) \simeq S_{1}^{T}H(t) - S_{0}^{T}H(t) - H^{T}(1)(R^{2})^{T}\tilde{h}\, H(t) + H^{T}(s)R^{T}\tilde{h}\, H(t). \qquad (3.8) $$

    The above relation can be written as

    $$ D_{s} g(s,t) \simeq 1\times S_{1}^{T}H(t) - 1\times S_{0}^{T}H(t) - 1\times H^{T}(1)(R^{2})^{T}\tilde{h}\, H(t) + H^{T}(s)R^{T}\tilde{h}\, H(t). \qquad (3.9) $$

    Putting $1=H^{T}(s)\hat{S}$ into the above relation, we get

    $$ D_{s} g(s,t) \simeq H^{T}(s)\hat{S}S_{1}^{T}H(t) - H^{T}(s)\hat{S}S_{0}^{T}H(t) - H^{T}(s)\hat{S}H^{T}(1)(R^{2})^{T}\tilde{h}\, H(t) + H^{T}(s)R^{T}\tilde{h}\, H(t) = H^{T}(s)\left( \hat{S}S_{1}^{T} - \hat{S}S_{0}^{T} - \hat{S}H^{T}(1)(R^{2})^{T}\tilde{h} + R^{T}\tilde{h} \right) H(t). \qquad (3.10) $$

    Setting $\rho_{2} = \hat{S}S_{1}^{T} - \hat{S}S_{0}^{T} - \hat{S}H^{T}(1)(R^{2})^{T}\tilde{h} + R^{T}\tilde{h}$, we have

    $$ D_{s} g(s,t) \simeq H^{T}(s)\,\rho_{2}\, H(t). \qquad (3.11) $$

    Applying ${}^{AB}I^{\gamma}_{t}$ to (1.1), putting $g(s,t)\simeq H^{T}(s)\rho_{1}H(t)$, $D_{s}g(s,t)\simeq H^{T}(s)\rho_{2}H(t)$, $D_{ss}\,g(s,t)\simeq H^{T}(s)\tilde{h}H(t)$ and approximating $\omega(s,t)\simeq H^{T}(s)\rho_{3}H(t)$ in (1.1) yields

    $$ H^{T}(s)\rho_{1}H(t) = d_{0}(s) + \vartheta_{1}(s)\, H^{T}(s)\rho_{2}\left({}^{AB}I^{\gamma}_{t}H(t)\right) + \vartheta_{2}(s)\, H^{T}(s)\tilde{h}\left({}^{AB}I^{\gamma}_{t}H(t)\right) + H^{T}(s)\rho_{3}\left({}^{AB}I^{\gamma}_{t}H(t)\right). \qquad (3.12) $$

    Now, approximating $d_{0}(s)\simeq H^{T}(s)S_{2}$, $\vartheta_{1}(s)\simeq S_{3}^{T}H(s)$, $\vartheta_{2}(s)\simeq S_{4}^{T}H(s)$ and using Theorem 2.4, the above relation can be rewritten as

    $$ H^{T}(s)\rho_{1}H(t) = H^{T}(s)S_{2} + S_{3}^{T}H(s)\, H^{T}(s)\rho_{2}I^{\gamma}H(t) + S_{4}^{T}H(s)\, H^{T}(s)\tilde{h}I^{\gamma}H(t) + H^{T}(s)\rho_{3}I^{\gamma}H(t). \qquad (3.13) $$

    By Theorem 2.2, the above relation can be written as

    $$ H^{T}(s)\rho_{1}H(t) = H^{T}(s)S_{2}\times 1 + \underbrace{S_{3}^{T}H(s)\, H^{T}(s)}_{\simeq\, H^{T}(s)\hat{S}_{3}}\rho_{2}I^{\gamma}H(t) + \underbrace{S_{4}^{T}H(s)\, H^{T}(s)}_{\simeq\, H^{T}(s)\hat{S}_{4}}\tilde{h}I^{\gamma}H(t) + H^{T}(s)\rho_{3}I^{\gamma}H(t). \qquad (3.14) $$

    Now, approximating $1\simeq\hat{S}^{T}H(t)$, we have

    $$ H^{T}(s)\rho_{1}H(t) = H^{T}(s)S_{2}\hat{S}^{T}H(t) + H^{T}(s)\hat{S}_{3}\rho_{2}I^{\gamma}H(t) + H^{T}(s)\hat{S}_{4}\tilde{h}I^{\gamma}H(t) + H^{T}(s)\rho_{3}I^{\gamma}H(t). \qquad (3.15) $$

    We can write the above relation as

    $$ H^{T}(s)\left( \rho_{1} - S_{2}\hat{S}^{T} - \hat{S}_{3}\rho_{2}I^{\gamma} - \hat{S}_{4}\tilde{h}I^{\gamma} - \rho_{3}I^{\gamma} \right)H(t) = 0. \qquad (3.16) $$

    Therefore we have

    $$ \rho_{1} - S_{2}\hat{S}^{T} - \hat{S}_{3}\rho_{2}I^{\gamma} - \hat{S}_{4}\tilde{h}I^{\gamma} - \rho_{3}I^{\gamma} = 0. \qquad (3.17) $$

    By solving the obtained system, we find $h_{ij}$, $i,j=1,2,\ldots,N+1$. Consequently, $g(s,t)$ can be calculated by using (3.7).

    Set $I=(a,b)^{n}$, $n=2,3$, in $\mathbb{R}^{n}$. The Sobolev norm is given as

    $$ \|g\|_{H^{\epsilon}(I)} = \left( \sum_{k=0}^{\epsilon}\sum_{l=0}^{n} \left\| D^{(k)}_{l} g \right\|^{2}_{L^{2}(I)} \right)^{\frac{1}{2}}, \qquad \epsilon\ge 1, $$

    where $D^{(k)}_{l}g$ denotes the $k$-th derivative of $g$ with respect to its $l$-th variable and $H^{\epsilon}(I)$ is the corresponding Sobolev space. The notation $|g|_{H^{\epsilon;N}}$ is given as [35]

    $$ |g|_{H^{\epsilon;N}(I)} = \left( \sum_{k=\min\{\epsilon,N+1\}}^{\epsilon}\sum_{l=0}^{n} \left\| D^{(k)}_{l} g \right\|^{2}_{L^{2}(I)} \right)^{\frac{1}{2}}. $$

    Theorem 4.1 (See [36]). Let $g(s,t)\in H^{\epsilon}(I)$ with $\epsilon\ge 1$. Considering $P_{N}g(s,t)=\sum_{r=1}^{N+1}\sum_{n=1}^{N+1} a_{r,n}P_{r}(s)P_{n}(t)$ as the best approximation of $g(s,t)$, we have

    $$ \|g - P_{N}g\|_{L^{2}(I)} \le C N^{1-\epsilon}\, |g|_{H^{\epsilon;N}(I)}, $$

    and if $1\le\iota\le\epsilon$, then

    $$ \|g - P_{N}g\|_{H^{\iota}(I)} \le C N^{\vartheta(\iota)-\epsilon}\, |g|_{H^{\epsilon;N}(I)}, $$

    with

    $$ \vartheta(\iota) = \begin{cases} 0, & \iota=0, \\ 2\iota-\frac{1}{2}, & \iota>0. \end{cases} $$

    Lemma 4.1. The AB derivative can be written in terms of the fractional-order RL integral as follows:

    $$ {}^{AB}D^{\gamma}_{t}\, g(t) = \frac{\Phi(\gamma)}{1-\gamma}\sum_{l=0}^{\infty}\varpi^{l}\; {}^{RL}I^{l\gamma+1}_{t}\, g'(t), \qquad \varpi = -\frac{\gamma}{1-\gamma}. $$

    Proof. According to the definitions of the AB derivative and the RL integral, the proof is complete.

    Theorem 4.2. Suppose that $0<\gamma\le 1$, $|\vartheta_{1}(s)|\le\tau_{1}$, $|\vartheta_{2}(s)|\le\tau_{2}$ and $g(s,t)\in H^{\epsilon}(I)$ with $\epsilon\ge 1$. If $E(s,t)$ is the residual error of approximating $g(s,t)$, then $E(s,t)$ can be estimated as

    $$ \|E(s,t)\|_{L^{2}(I)} \le \varrho_{1}\left( |g|^{*}_{H^{\epsilon;N}(I)} + |D_{s}g|^{*}_{H^{\epsilon;N}(I)} \right), $$

    where $1\le\iota\le\epsilon$, $\varrho_{1}$ is a constant, $|g|^{*}_{H^{\epsilon;N}(I)} = C N^{\vartheta(\iota)-\epsilon}\,|g|_{H^{\epsilon;N}(I)}$ and similarly for $|D_{s}g|^{*}_{H^{\epsilon;N}(I)}$.

    Proof. According to (1.1),

    $$ {}^{AB}D^{\gamma}_{t}\, g(s,t) = \vartheta_{1}(s)\, D_{s}\, g(s,t) + \vartheta_{2}(s)\, D_{ss}\, g(s,t) + \omega(s,t), \qquad (4.1) $$

    and

    $$ {}^{AB}D^{\gamma}_{t}\, g_{N}(s,t) = \vartheta_{1}(s)\, D_{s}\, g_{N}(s,t) + \vartheta_{2}(s)\, D_{ss}\, g_{N}(s,t) + \omega(s,t). \qquad (4.2) $$

    Substituting Eqs (4.1) and (4.2) in E(s,t) yields

    $$ E(s,t) = {}^{AB}D^{\gamma}_{t}\left(g(s,t)-g_{N}(s,t)\right) + \vartheta_{1}(s)\, D_{s}\left(g_{N}(s,t)-g(s,t)\right) + \vartheta_{2}(s)\, D_{ss}\left(g_{N}(s,t)-g(s,t)\right), $$

    and then

    $$ \|E(s,t)\|^{2}_{L^{2}(I)} \le \left\|{}^{AB}D^{\gamma}_{t}\left(g(s,t)-g_{N}(s,t)\right)\right\|^{2}_{L^{2}(I)} + \tau_{1}\left\| D_{s}\left(g(s,t)-g_{N}(s,t)\right)\right\|^{2}_{L^{2}(I)} + \tau_{2}\left\| D_{ss}\left(g(s,t)-g_{N}(s,t)\right)\right\|^{2}_{L^{2}(I)}. \qquad (4.3) $$

    Now, we must find a bound for $\left\|{}^{AB}D^{\gamma}_{t}\left(g(s,t)-g_{N}(s,t)\right)\right\|_{L^{2}(I)}$. In view of [26], and by using Lemma 4.1, in a similar way we write

    $$ \begin{aligned} \left\|{}^{AB}D^{\gamma}_{t}\left(g(s,t)-g_{N}(s,t)\right)\right\|^{2}_{L^{2}(I)} &= \left\| \frac{\Phi(\gamma)}{1-\gamma}\sum_{l=0}^{\infty}\varpi^{l}\, {}^{RL}I^{l\gamma+1}_{t}\left( D_{t}g(s,t) - D_{t}g_{N}(s,t) \right) \right\|^{2}_{L^{2}(I)} \\ &\le \left( \frac{\Phi(\gamma)}{1-\gamma}\sum_{l=0}^{\infty}\frac{\varpi^{l}}{\Gamma(l\gamma+2)} \right)^{2}\left\| D_{t}g(s,t) - D_{t}g_{N}(s,t) \right\|^{2}_{L^{2}(I)} \\ &\le \left( \frac{\Phi(\gamma)}{1-\gamma}\, E_{\gamma,2}(\varpi) \right)^{2}\left\| g(s,t) - g_{N}(s,t) \right\|^{2}_{H^{\iota}(I)}. \end{aligned} $$

    Therefore,

    $$ \left\|{}^{AB}D^{\gamma}_{t}\left(g(s,t)-g_{N}(s,t)\right)\right\|_{L^{2}(I)} \le \delta_{1}\, C N^{\vartheta(\iota)-\epsilon}\, |g|_{H^{\epsilon;N}(I)}, \qquad (4.4) $$

    where $\frac{\Phi(\gamma)}{1-\gamma}E_{\gamma,2}(\varpi)\le\delta_{1}$. Thus, from (4.4), we can write

    $$ \left\|{}^{AB}D^{\gamma}_{t}\left(g(s,t)-g_{N}(s,t)\right)\right\|^{2}_{L^{2}(I)} \le \delta_{1}\, |g|^{*}_{H^{\epsilon;N}(I)}, \qquad (4.5) $$

    where $|g|^{*}_{H^{\epsilon;N}(I)} = C N^{\vartheta(\iota)-\epsilon}\, |g|_{H^{\epsilon;N}(I)}$. By Theorem 4.1,

    $$ \left\| D_{s}\left(g(s,t)-g_{N}(s,t)\right) \right\|_{L^{2}(I)} \le C N^{\vartheta(\iota)-\epsilon}\, |g|_{H^{\epsilon;N}(I)} = |g|^{*}_{H^{\epsilon;N}(I)}, \qquad (4.6) $$

    and

    $$ \left\| D_{ss}\left(g(s,t)-g_{N}(s,t)\right) \right\|_{L^{2}(I)} = \left\| D_{s}\left( D_{s}\left(g(s,t)-g_{N}(s,t)\right) \right) \right\|_{L^{2}(I)} \le \left\| D_{s}g(s,t) - D_{s}g_{N}(s,t) \right\|_{H^{\iota}(I)} \le |D_{s}g|^{*}_{H^{\epsilon;N}(I)}, \qquad (4.7) $$

    where $|D_{s}g|^{*}_{H^{\epsilon;N}(I)} = C N^{\vartheta(\iota)-\epsilon}\, |D_{s}g|_{H^{\epsilon;N}(I)}$. Taking $\varrho_{1}=\max\{\delta_{1}+\tau_{1},\tau_{2}\}$ and substituting (4.5)–(4.7) into (4.3), the desired result is obtained.

    In this section, the proposed technique described in Section 3 is tested using some numerical examples. The codes were written in Mathematica.

    Example 5.1. Consider (1.1) with $\vartheta_{1}(s)=1$, $\vartheta_{2}(s)=0.1$ and $\omega(s,t)=0$. The initial and boundary conditions are extracted from the analytical solution $g(s,t)=\tau_{0}e^{\tau_{1}t-\tau_{2}s}$, valid when $\gamma=1$. Setting $\tau_{0}=1$, $\tau_{1}=0.2$, $\tau_{2}=\frac{\vartheta_{1}(s)+\sqrt{\vartheta_{1}^{2}(s)+4\vartheta_{2}(s)\tau_{1}}}{2\vartheta_{2}(s)}$, considering $N=3$ and using the proposed technique, the numerical results for the TF-ADE are reported in Tables 1 and 2 and in Figures 1–3.

    Table 1.  (Example 5.1) Numerical results of the absolute error when $\gamma=0.99$, $N=3$, $t=1$.

    s      Method of [21]    The presented method
    0.1    1.05799e-2        3.86477e-4
    0.2    1.21467e-2        1.33870e-4
    0.3    4.94776e-3        4.08507e-5
    0.4    2.35280e-4        1.48842e-4
    0.5    2.36604e-3        2.01089e-4
    0.6    1.08676e-2        2.08410e-4
    0.7    2.18851e-2        1.81459e-4
    0.8    2.91950e-2        1.30730e-4
    0.9    2.49148e-2        6.65580e-5
    Table 2.  (Example 5.1) Numerical results of the absolute error when $\gamma=0.99$, $N=3$, $s=0.75$.

    t      Method of [19]    The presented method
    0.1    1.13874e-3        2.15272e-3
    0.2    1.41664e-3        2.32350e-3
    0.3    1.62234e-3        2.30934e-3
    0.4    1.76917e-3        2.14768e-3
    0.5    1.87045e-3        1.87583e-3
    0.6    1.93953e-3        1.53092e-3
    0.7    1.98971e-3        1.14997e-3
    0.8    2.03434e-3        7.69801e-4
    0.9    2.08671e-3        4.27112e-4
    Figure 1.  (Example 5.1) The absolute error at some selected points when (a) γ=0.8, (b) γ=0.9, (c) γ=0.99, (d) γ=1.
    Figure 2.  (Example 5.1) Error contour plots when (a) γ=0.99, (b) γ=1, (c) γ=0.8, (d) γ=0.9.
    Figure 3.  (Example 5.1) The absolute error at some selected points when (a) γ=0.8, (b) γ=0.9, (c) γ=0.99, (d) γ=1.
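    As a consistency check on Example 5.1, the reference solution and the value of $\tau_{2}$ can be verified to satisfy (1.1) with $\gamma=1$. The sign conventions below follow our reconstruction of the source formula, so they should be read as an assumption rather than as the authors' exact statement:

```python
import math

# Example 5.1 data (gamma = 1): g(s, t) = tau0 * exp(tau1 * t - tau2 * s),
# with tau2 the positive root of  theta2 * tau2^2 - theta1 * tau2 - tau1 = 0.
theta1, theta2 = 1.0, 0.1
tau0, tau1 = 1.0, 0.2
tau2 = (theta1 + math.sqrt(theta1**2 + 4.0 * theta2 * tau1)) / (2.0 * theta2)

def g_exact(s: float, t: float) -> float:
    return tau0 * math.exp(tau1 * t - tau2 * s)

# Finite-difference residual of  D_t g - theta1 D_s g - theta2 D_ss g  at a sample point.
h, s, t = 1e-5, 0.4, 0.6
dt  = (g_exact(s, t + h) - g_exact(s, t - h)) / (2 * h)
ds  = (g_exact(s + h, t) - g_exact(s - h, t)) / (2 * h)
dss = (g_exact(s + h, t) - 2 * g_exact(s, t) + g_exact(s - h, t)) / h**2
print(dt - theta1 * ds - theta2 * dss)   # ~ 0, up to finite-difference error
```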

    Example 5.2. Consider (1.1) with $\vartheta_{1}(s)=s$, $\vartheta_{2}(s)=\frac{s^{2}}{2}$ and $\omega(s,t)=0$. The initial and boundary conditions are extracted from the analytical solution $g(s,t)=s\,E_{\alpha}(t^{\alpha})$. By setting $N=5$ and using the proposed technique, the numerical results for the TF-KE are reported in Figures 4–6.

    Figure 4.  (Example 5.2) The absolute error at some selected points when (a) γ=0.7, (b) γ=0.8, (c) γ=0.9, (d) γ=1.
    Figure 5.  (Example 5.2) Error contour plots when (a) γ=0.7, (b) γ=0.8, (c) γ=0.9, (d) γ=1.
    Figure 6.  (Example 5.2) The absolute error at some selected points when (a) γ=0.7, (b) γ=0.8, (c) γ=0.9, (d) γ=1.

    Time fractional Kolmogorov equations and time fractional advection-diffusion equations are used to model many problems in mathematical physics and many scientific applications, so developing efficient methods for solving such equations plays an important role. In this paper, the proposed technique was used to solve TF-ADEs and TF-KEs. The technique reduces the problems under study to a set of algebraic equations; solving this system then gives the numerical solution. An error estimate was provided. The method was tested on a few examples of TF-ADEs and TF-KEs to check its accuracy and applicability. It might also be applied to systems of fractional-order integro-differential equations and partial differential equations.

    The authors declare that they have not used artificial intelligence tools in the creation of this article.

    The authors would like to acknowledge the support of the Scientific Research Fund Project of Yunnan Provincial Department of Education, No. 2022J0949. The authors also would like to thank the anonymous reviewers for their valuable and constructive comments, which helped improve the paper.

    The authors declare that there is no conflict of interest.



    [1] P. Branco, L. Torgo, R. P. Ribeiro, A survey of predictive modeling on imbalanced domains, ACM Comput. Surv., 49 (2016), 1–50. https://doi.org/10.1145/2907070 doi: 10.1145/2907070
    [2] H. Guo, Y. Li, J. Shang, M. Gu, Y. Huang, B. Gong, Learning from class-imbalance data: Review of methods and applications, Expert Syst. Appl., 73 (2017), 220–239. https://doi.org/10.1016/j.eswa.2016.12.035 doi: 10.1016/j.eswa.2016.12.035
    [3] Y. Qian, S. Ye, Y. Zhang, J. Zhang, SUMO-Forest: A Cascade Forest based method for the prediction of SUMOylation sites on imbalanced data, Gene, 741 (2020), 144536. https://doi.org/10.1016/j.gene.2020.144536 doi: 10.1016/j.gene.2020.144536
    [4] P. D. Mahajan, A. Maurya, A. Megahed, A. Elwany, R. Strong, J. Blomberg, Optimizing predictive precision in imbalanced datasets for actionable revenue change prediction, Eur. J. Oper. Res., 285 (2020), 1095–1113. https://doi.org/10.1016/j.ejor.2020.02.036 doi: 10.1016/j.ejor.2020.02.036
    [5] G. Chen, Z. Ge, SVM-tree and SVM-forest algorithms for imbalanced fault classification in industrial processes, IFAC J. Syst. Control, 8 (2019), 100052. https://doi.org/10.1016/j.ifacsc.2019.100052 doi: 10.1016/j.ifacsc.2019.100052
    [6] P. Wang, F. Su, Z. Zhao, Y. Guo, Y. Zhao, B. Zhuang, Deep class-skewed learning for face recognition, Neurocomputing, 363 (2019), 35–45. https://doi.org/10.1016/j.neucom.2019.04.085 doi: 10.1016/j.neucom.2019.04.085
    [7] Y. S. Li, H. Chi, X. Y. Shao, M. L. Qi, B. G. Xu, A novel random forest approach for imbalance problem in crime linkage, Knowledge-Based Syst., 195 (2020), 105738. https://doi.org/10.1016/j.knosys.2020.105738 doi: 10.1016/j.knosys.2020.105738
    [8] S. Barua, M. M. Islam, X. Yao, K. Murase, MWMOTE-majority weighted minority oversampling technique for imbalanced data set learning, IEEE Trans. Knowl. Data Eng., 26 (2012), 405–425. https://doi.org/10.1109/TKDE.2012.232 doi: 10.1109/TKDE.2012.232
    [9] G. E. A. P. A. Batista, R. C. Prati, M. C. Monard, A study of the behavior of several methods for balancing machine learning training data, ACM SIGKDD Explorations Newsl., 6 (2004), 20–29. https://doi.org/10.1145/1007730.1007735 doi: 10.1145/1007730.1007735
    [10] K. E. Bennin, J. Keung, P. Phannachitta, A. Monden, S. Mensah, MAHAKIL: diversity based oversampling approach to alleviate the class imbalance issue in software defect prediction, IEEE Trans. Software Eng., 44 (2017), 534–550. https://doi.org/10.1109/TSE.2017.2731766 doi: 10.1109/TSE.2017.2731766
    [11] N. V. Chawla, K. W. Bowyer, L. O. Hall, W. P. Kegelmeyer, SMOTE: Synthetic minority over-sampling technique, J. Artif. Intell. Res., 16 (2002), 321–357. https://doi.org/10.1613/jair.953 doi: 10.1613/jair.953
    [12] M. Zheng, T. Li, X. Zheng, Q. Yu, C. Chen, D. Zhou, et al., UFFDFR: Undersampling framework with denoising, fuzzy c-means clustering, and representative sample selection for imbalanced data classification, Inf. Sci., 576 (2021), 658–680. https://doi.org/10.1016/j.ins.2021.07.053 doi: 10.1016/j.ins.2021.07.053
    [13] G. Ahn, Y. J. Park, S. Hur, A membership probability-based undersampling algorithm for imbalanced data, J. Classif., 38 (2021), 2–15. https://doi.org/10.1007/s00357-019-09359-9 doi: 10.1007/s00357-019-09359-9
    [14] M. Li, A. Xiong, L. Wang, S. Deng, J. Ye, ACO Resampling: Enhancing the performance of oversampling methods for class imbalance classification, Knowledge-Based Syst., 196 (2020), 105818. https://doi.org/10.1016/j.knosys.2020.105818 doi: 10.1016/j.knosys.2020.105818
    [15] T. Pan, J. Zhao, W. Wu, J. Yang, Learning imbalanced datasets based on SMOTE and Gaussian distribution, Inf. Sci., 512 (2020), 1214–1233. https://doi.org/10.1016/j.ins.2019.10.048 doi: 10.1016/j.ins.2019.10.048
    [16] T. Zhang, Y. Li, X. Wang, Gaussian prior based adaptive synthetic sampling with non-linear sample space for imbalanced learning, Knowledge-Based Syst., 191 (2020), 105231. https://doi.org/10.1016/j.knosys.2019.105231 doi: 10.1016/j.knosys.2019.105231
    [17] R. Batuwita, V. Palade, FSVM-CIL: Fuzzy support vector machines for class imbalance learning, IEEE Trans. Fuzzy Syst., 18 (2010), 558–571. https://doi.org/10.1109/TFUZZ.2010.2042721 doi: 10.1109/TFUZZ.2010.2042721
    [18] C. L. Castro, A. P. Braga, Novel cost-sensitive approach to improve the multilayer perceptron performance on imbalanced data, IEEE Trans. Neural Networks Learn. Syst., 24 (2013), 888–899. https://doi.org/10.1109/TNNLS.2013.2246188 doi: 10.1109/TNNLS.2013.2246188
    [19] S. Datta, S. Das, Near-Bayesian Support Vector Machines for imbalanced data classification with equal or unequal misclassification costs, Neural Networks, 70 (2015), 39–52. https://doi.org/10.1016/j.neunet.2015.06.005 doi: 10.1016/j.neunet.2015.06.005
    [20] H. Yu, C. Mu, C. Sun, W. Yang, X. Yang, X. Zuo, Support vector machine-based optimized decision threshold adjustment strategy for classifying imbalanced data, Knowledge-Based Syst., 76 (2015), 67–78. https://doi.org/10.1016/j.knosys.2014.12.007 doi: 10.1016/j.knosys.2014.12.007
    [21] H. Yu, C. Sun, X. Yang, W. Yang, J. Shen, Y. Qi, ODOC-ELM: Optimal decision outputs compensation-based extreme learning machine for classifying imbalanced data, Knowledge-Based Syst., 92 (2016), 55–70. https://doi.org/10.1016/j.knosys.2015.10.012 doi: 10.1016/j.knosys.2015.10.012
    [22] Z. H. Zhou, X. Y. Liu, Training cost-sensitive neural networks with methods addressing the class imbalance problem, IEEE Trans. Knowl. Data Eng., 18 (2006), 63–77. https://doi.org/10.1109/TKDE.2006.17 doi: 10.1109/TKDE.2006.17
    [23] D. Devi, S. K. Biswas, B. Purkayastha, Learning in presence of class imbalance and class overlapping by using one-class SVM and undersampling technique, Connect. Sci., 31 (2019), 105–142. https://doi.org/10.1080/09540091.2018.1560394 doi: 10.1080/09540091.2018.1560394
    [24] R. Barandela, R. M. Valdovinos, J. S. Sanches, New applications of ensemble of classifiers, Pattern Anal. Appl., 6 (2003), 245–256. https://doi.org/10.1007/s10044-003-0192-z doi: 10.1007/s10044-003-0192-z
    [25] N. V. Chawla, A. Lazarevic, L. O. Hall, K. W. Bowyer, SMOTEBoost: Improving prediction of the minority class in Boosting, in Knowledge Discovery in Databases: PKDD 2003, (2003), 107–119. https://doi.org/10.1007/978-3-540-39804-2_12
    [26] G. Collell, D. Prelec, K. R. Patil, A simple plug-in bagging ensemble based on threshold-moving for classifying binary and multiclass imbalanced data, Neurocomputing, 275 (2018), 330–340. https://doi.org/10.1016/j.neucom.2017.08.035 doi: 10.1016/j.neucom.2017.08.035
    [27] W. Fan, S. J. Stolfo, J. Zhang, P. K. Chan, AdaCost: Misclassification cost-sensitive boosting, in International Conference of Machine Learning, (1999), 97–105. Available from: http://ids.cs.columbia.edu/sites/default/files/Adacost_Imbalanced_classes.pdf.
    [28] M. Galar, A. Fernandez, E. Barrenechea, F. Herrera, EUSBoost: Enhancing ensembles for highly imbalanced data-sets by evolutionary undersampling, Pattern Recognit., 46 (2013), 3460–3471. https://doi.org/10.1016/j.patcog.2013.05.006 doi: 10.1016/j.patcog.2013.05.006
    [29] P. Lim, C. K. Goh, K. C. Tan, Evolutionary Cluster-Based Synthetic Oversampling Ensemble (ECO-Ensemble) for imbalance learning, IEEE Trans. Cybern., 47 (2016), 2850–2861. https://doi.org/10.1109/TCYB.2016.2579658 doi: 10.1109/TCYB.2016.2579658
    [30] X. Y. Liu, J. Wu, Z. H. Zhou, Exploratory undersampling for class-imbalance learning, IEEE Trans. Syst. Man Cybern. Part B Cybern., 39 (2008), 539–550. https://doi.org/10.1109/TSMCB.2008.2007853 doi: 10.1109/TSMCB.2008.2007853
    [31] S. E. Roshan, S. Asadi, Improvement of Bagging performance for classification of imbalanced datasets using evolutionary multi-objective optimization, Eng. Appl. Artif. Intell., 87 (2020), 103319. https://doi.org/10.1016/j.engappai.2019.103319 doi: 10.1016/j.engappai.2019.103319
    [32] A. Roy, R. M. O. Cruz, R. Sabourin, G. D. C. Cavalcanti, A study on combining dynamic selection and data preprocessing for imbalance learning, Neurocomputing, 286 (2018), 179–192. https://doi.org/10.1016/j.neucom.2018.01.060 doi: 10.1016/j.neucom.2018.01.060
    [33] C. Seiffert, T. M. Khoshgoftaar, J. V. Hulse, A. Napolitano, RUSBoost: A hybrid approach to alleviating class imbalance, IEEE Trans. Syst. Man Cybern. Part A Syst. Humans, 40 (2009), 185–197. https://doi.org/10.1109/TSMCA.2009.2029559 doi: 10.1109/TSMCA.2009.2029559
    [34] Y. Sun, M. S. Kamel, A. K. C. Wong, Y. Wang, Cost-sensitive boosting for classification of imbalanced data, Pattern Recognit., 40 (2007), 3358–3378. https://doi.org/10.1016/j.patcog.2007.04.009 doi: 10.1016/j.patcog.2007.04.009
    [35] B. Tang, H. He, GIR-based ensemble sampling approaches for imbalanced learning, Pattern Recognit., 71 (2017), 306–319. https://doi.org/10.1016/j.patcog.2017.06.019 doi: 10.1016/j.patcog.2017.06.019
    [36] D. Tao, X. Tang, X. Li, X. Wu, Asymmetric bagging and random subspace for support vector machines-based relevance feedback in image retrieval, IEEE Trans. Pattern Anal. Mach. Intell., 28 (2006), 1088–1099. https://doi.org/10.1109/TPAMI.2006.134 doi: 10.1109/TPAMI.2006.134
    [37] S. Wang, X. Yao, Diversity analysis on imbalanced data sets by using ensemble models, in 2009 IEEE Symposium on Computational Intelligence and Data Mining, (2009), 324–331. https://doi.org/10.1109/CIDM.2009.4938667
    [38] H. Yu, J. Ni, An improved ensemble learning method for classifying high-dimensional and imbalanced biomedicine data, IEEE/ACM Trans. Comput. Biol. Bioinf., 11 (2014), 657–666. https://doi.org/10.1109/TCBB.2014.2306838 doi: 10.1109/TCBB.2014.2306838
    [39] H. G. Zefrehi, H. Altincay, Imbalance learning using heterogeneous ensembles, Expert Syst. Appl., 142 (2020), 113005. https://doi.org/10.1016/j.eswa.2019.113005 doi: 10.1016/j.eswa.2019.113005
    [40] J. F. Díez-Pastor, J. J. Rodríguez, C. I. García-Osorio, L. I. Kuncheva, Diversity techniques improve the performance of the best imbalance learning ensembles, Inf. Sci., 325 (2015), 98–117. https://doi.org/10.1016/j.ins.2015.07.025 doi: 10.1016/j.ins.2015.07.025
    [41] Z. H. Zhou, J. Wu, W. Tang, Ensembling neural networks: many could be better than all, Artif. Intell., 137 (2002), 239–263. https://doi.org/10.1016/S0004-3702(02)00190-X doi: 10.1016/S0004-3702(02)00190-X
    [42] I. Triguero, S. González, J. M. Moyano, S. García, J. Alcalá-Fdez, J. Luengo, et al., KEEL 3.0: An open source software for multi-stage analysis in data mining, Int. J. Comput. Intell. Syst., 10 (2017), 1238–1249. https://doi.org/10.2991/ijcis.10.1.82 doi: 10.2991/ijcis.10.1.82
    [43] C. Blake, E. Keogh, C. J. Merz, UCI repository of machine learning databases, 1998. Available from: https://cir.nii.ac.jp/crid/1572543025422228096#citations_container.
    [44] L. Breiman, Bagging predictors, Mach. Learn., 24 (1996), 123–140. https://doi.org/10.1007/BF00058655 doi: 10.1007/BF00058655
    [45] R. E. Schapire, A brief introduction to boosting, in Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, (1999), 1401–1406. Available from: https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=fa329f834e834108ccdc536db85ce368fee227ce.
    [46] L. Breiman, Random forests, Mach. Learn., 45 (2001), 5–32. https://doi.org/10.1023/A:1010933404324 doi: 10.1023/A:1010933404324
    [47] T. K. Ho, The random subspace method for constructing decision forests, IEEE Trans. Pattern Anal. Mach. Intell., 20 (1998), 832–844. https://doi.org/10.1109/34.709601 doi: 10.1109/34.709601
    [48] S. A. Gilpin, D. M. Dunlavy, Relationships between accuracy and diversity in heterogeneous ensemble classifiers, 2009.
    [49] K. W. Hsu, J. Srivastava, Diversity in combinations of heterogeneous classifiers, in PAKDD 2009: Advances in Knowledge Discovery and Data Mining, (2009), 923–932. https://doi.org/10.1007/978-3-642-01307-2_97
    [50] R. M. O. Cruz, R. Sabourin, G. D. C. Cavalcanti, Dynamic classifier selection: Recent advances and perspectives, Inf. Fusion, 41 (2018), 195–216. https://doi.org/10.1016/j.inffus.2017.09.010 doi: 10.1016/j.inffus.2017.09.010
    [51] É. N. de Souza, S. Matwin, Extending adaboost to iteratively vary its base classifiers, in Canadian AI 2011: Advances in Artificial Intelligence, (2011), 384–389. https://doi.org/10.1007/978-3-642-21043-3_46
    [52] D. Whitley, A genetic algorithm tutorial, Stat. Comput., 4 (1994), 65–85. https://doi.org/10.1007/BF00175354 doi: 10.1007/BF00175354
    [53] J. Demsar, Statistical comparisons of classifiers over multiple data sets, J. Mach. Learn. Res., 7 (2006), 1–30. Available from: https://www.jmlr.org/papers/volume7/demsar06a/demsar06a.pdf.
    [54] S. García, A. Fernández, J. Luengo, F. Herrera, Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power, Inf. Sci., 180 (2010), 2044–2064. https://doi.org/10.1016/j.ins.2009.12.010 doi: 10.1016/j.ins.2009.12.010
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
