
Automatic recognition of white blood cell images with memory efficient superpixel metric GNN: SMGNN


  • Received: 14 October 2023 Revised: 19 December 2023 Accepted: 22 December 2023 Published: 10 January 2024
  • An automatic recognition system for white blood cells can assist hematologists in the diagnosis of many diseases, where accuracy and efficiency are paramount for computer-based systems. In this paper, we presented a new image processing system to recognize the five types of white blood cells in peripheral blood, with a marked improvement in efficiency compared with mainstream methods. Prevailing deep learning segmentation solutions often use millions of parameters to extract high-level image features and neglect prior domain knowledge, which consumes substantial computational resources and increases the risk of overfitting, especially when only limited medical image samples are available for training. To address these challenges, we proposed a novel memory-efficient strategy that exploits graph structures derived from the images. Specifically, we introduced a lightweight superpixel-based graph neural network (GNN) and broke new ground by introducing superpixel metric learning to segment the nucleus and cytoplasm. Remarkably, our proposed segmentation model, the superpixel metric graph neural network (SMGNN), achieved state-of-the-art segmentation performance while using up to 10000× fewer parameters than existing approaches. The subsequent segmentation-based cell type classification showed satisfactory results, indicating that such automatic recognition algorithms are accurate and efficient enough to execute in hematological laboratories. Our code is publicly available at https://github.com/jyh6681/SPXL-GNN.

    Citation: Yuanhong Jiang, Yiqing Shen, Yuguang Wang, Qiaoqiao Ding. Automatic recognition of white blood cell images with memory efficient superpixel metric GNN: SMGNN[J]. Mathematical Biosciences and Engineering, 2024, 21(2): 2163-2188. doi: 10.3934/mbe.2024095




    Fractional calculus (FC) is an appropriate tool to describe the physical properties of materials, and its mathematical theory and realistic applications are now well recognized. Thus fractional-order differential equations (FDEs) have been established for modeling real phenomena in various fields such as physics, engineering, mechanics, control theory, economics, medical science and finance. Many studies deal with numerical methods for FDEs. Modeling of the spring pendulum in the fractional sense and its numerical solution [1], numerical solution of a fractional reaction-diffusion problem [2], study of the motion equation of a capacitor microphone in fractional calculus [3], fractional modeling and numerical solution of two coupled pendulums [4], and the fractional telegraph equation treated with methods such as the fractional homotopy analysis transform method [5] and the homotopy perturbation transform method [6] can be found in the literature.

    The time fractional telegraph equation with Riesz space fractional derivative is a typical fractional diffusion-wave equation which is applied in signal analysis, the modeling of reaction diffusion, the random walk of suspension flows and so on. Lately, numerical methods such as finite element approximation [7], the L1/L2-approximation method, the standard/shifted Grünwald method and the matrix transform method [8], Chebyshev Tau approximation [9], radial basis function approximation [10], low-rank solvers [11], and some other useful methods have been used to find approximate solutions of fractional diffusion equations with Riesz space fractional derivative. The differential transform method (DTM) is a reliable and effective method which was constructed by Zhou [12] for solving linear and nonlinear differential equations arising in electrical circuit problems. This method constructs an analytical solution in the form of a polynomial based on Taylor expansion, which is different from the traditional high-order Taylor series method. DTM uses an iterative procedure to obtain the Taylor series expansion, which needs less computational time than the Taylor series method. DTM was extended by Chen and Ho [13] to two-dimensional functions for solving partial differential equations. One- and two-dimensional differential transform methods were generalized to ordinary and partial differential equations of fractional order (GDTM) by Odibat and Momani [14,15]. DTM and GDTM have been applied by many authors to find approximate solutions for different kinds of equations. Ray [16] used a modification of GDTM to find numerical solutions for KdV-type equations [17]. Moreover, Soltanalizadeh, Srivastava and Garg [18,19,20] applied GDTM and DTM to find exact and numerical solutions for telegraph equations in the one-space-dimensional, two- and three-dimensional, and space-time fractional cases. Furthermore, GDTM was applied to find numerical approximations of the time fractional diffusion equation and differential-difference equations [21,22], and DTM was used to obtain approximate solutions of the telegraph equation [23].

    In previous studies, the partial differential equations solved by DTM and GDTM, such as the telegraph equation, did not involve the Riesz space fractional derivative operator. This work investigates an improved scheme for fractional partial differential equations with Riesz space fractional derivative in order to obtain semi-analytical solutions for these types of equations. For simplicity we consider a telegraph equation with the Riesz operator on a finite one-dimensional domain in the form:

    _0^CD_t^{2\beta}u(x, t)+2k \, _0^CD_t^{\beta}u(x, t)+v^2u(x, t)- \eta \dfrac{\partial^\gamma u(x, t)}{\partial |x|^\gamma} = f(x, t), \; \; 0 < \beta \le 1, (1.1)
    a \le x \le b, \; \; 0 \le t \le T,

    subject to the initial conditions

    u(x, 0) = \phi(x), \; \; \dfrac{\partial u(x, 0)}{\partial t} = \psi(x), \; \; a \le x \le b,

    and boundary conditions

    u(a, t) = u(b, t) = 0, \; \; 0 \le t \le T,

    where k > v \ge 0 and \eta > 0 are constants.

    Generally, the Riesz space-fractional operator \dfrac{\partial^\gamma}{\partial |x|^\gamma} over [a, b] is defined by:

    \dfrac{\partial^\gamma}{\partial |x|^\gamma}u(x, t) = \dfrac{-1}{2 \cos \frac{\gamma \pi}{2}} \dfrac{1}{\Gamma(n-\gamma)} \dfrac{\partial^n}{\partial x^n} \int_a^b \dfrac{u(s, t)\, ds}{|x-s|^{\gamma+1-n}}, \; \; n-1 < \gamma < n, \; n \in \mathbb{N}.

    We suppose that 1<γ<2, and u(x,t),f(x,t),ϕ(x) and ψ(x) are real-valued and sufficiently well-behaved functions.

    The main goal of this paper is to provide an improved scheme based on GDTM to obtain numerical solutions of Eq (1.1). To implement this technique, we separate the main equation into sub-equations by means of new theorems and definitions. This separation allows us to derive a system of fractional differential equations. We then apply the improved GDTM to this system to obtain a system of recurrence relations. The solution is attained by applying the inverse transformation to the last system.

    The rest of this study is organized as follows. Section 2 discusses preliminary definitions of fractional calculus. In Section 3 we review the two-dimensional generalized differential transform method and the two-dimensional differential transform method; afterward, in Section 4, the implementation of the method, accompanied by new definitions and theorems, is described for equations containing the Riesz space fractional derivative. Section 5 provides an error bound for the two-dimensional GDTM. In Section 6 some numerical examples are presented to demonstrate the efficiency and convenience of the theoretical results. Concluding remarks are given in Section 7.

    In this section some necessary definitions of fractional calculus are introduced. Since the Riemann-Liouville and the Caputo derivatives are used most often, and the Riesz fractional derivative is defined in terms of the left and right Riemann-Liouville derivatives, we focus on these definitions. Furthermore, in the modeling of most physical problems the initial conditions are given as integer-order derivatives, which coincide with the Caputo definition of initial conditions; therefore the Caputo derivative is used in the numerical algorithm.

    Definition 2.1. The left and right Riemann-Liouville integrals of order α>0 for a function f(x) on interval (a,b) are defined as follows,

    \begin{cases} _aJ_x^{\alpha}f(x) = \dfrac{1}{\Gamma(\alpha)} \int_a^x \dfrac{f(s)}{(x-s)^{1-\alpha}}\, ds, \\ _xJ_b^{\alpha}f(x) = \dfrac{1}{\Gamma(\alpha)} \int_x^b \dfrac{f(s)}{(s-x)^{1-\alpha}}\, ds, \end{cases}

    where \Gamma(z) = \int_0^{\infty} e^{-t}t^{z-1}\, dt, \; z \in \mathbb{C} , is the Gamma function.

    Definition 2.2. The left and right Riemann-Liouville derivatives of order α>0 for a function f(x) defined on interval (a,b) are given as follows,

    \begin{cases} _a^RD_x^{\alpha}f(x) = \dfrac{1}{\Gamma(m-\alpha)} \dfrac{d^m}{dx^m} \int_a^x (x-s)^{m-\alpha-1}f(s)\, ds, \\ _x^RD_b^{\alpha}f(x) = \dfrac{(-1)^m}{\Gamma(m-\alpha)} \dfrac{d^m}{dx^m} \int_x^b (s-x)^{m-\alpha-1}f(s)\, ds, \end{cases}

    where m-1 < \alpha \le m .

    Remark 2.3. From the definition of the Riemann-Liouville derivatives and the definition of the Riesz space fractional derivative, one can conclude that for 0 \le x \le L

    \dfrac{\partial^\gamma}{\partial |x|^\gamma}u(x, t) = \xi_\gamma \left( _0^RD_x^{\gamma}+ \, _x^RD_L^{\gamma} \right)u(x, t) , where \xi_\gamma = \dfrac{-1}{2 \cos \frac{\gamma \pi}{2}} , \gamma \ne 1 , and _0^RD_x^{\gamma} and _x^RD_L^{\gamma} are the left and right Riemann-Liouville derivatives.

    Definition 2.4. The left and right Caputo derivatives of order α for a function f(x) are defined as

    \begin{cases} _a^CD_x^{\alpha}f(x) = \dfrac{1}{\Gamma(m-\alpha)} \int_a^x (x-s)^{m-\alpha-1} \dfrac{d^m f(s)}{ds^m}\, ds, \; \; m-1 < \alpha \le m, \\ _x^CD_b^{\alpha}f(x) = \dfrac{(-1)^m}{\Gamma(m-\alpha)} \int_x^b (s-x)^{m-\alpha-1} \dfrac{d^m f(s)}{ds^m}\, ds, \; \; m-1 < \alpha \le m. \end{cases}

    There are relations between Riemann-Liouville and Caputo derivatives as follows:

    _a^CD_x^{\alpha}f(x) = \, _a^RD_x^{\alpha}f(x)- \sum\limits_{k = 0}^{m-1} \dfrac{f^{(k)}(a)(x-a)^{k-\alpha}}{\Gamma(1+k-\alpha)}, (2.1)
    _x^CD_b^{\alpha}f(x) = \, _x^RD_b^{\alpha}f(x)- \sum\limits_{k = 0}^{m-1} \dfrac{f^{(k)}(b)(b-x)^{k-\alpha}}{\Gamma(1+k-\alpha)}. (2.2)

    It is clear that if f^{(k)}(a) = 0, \; k = 0, 1, ..., m-1 , then the Riemann-Liouville and Caputo derivatives are equal. For comprehensive properties of fractional derivatives and integrals one can refer to the literature [24,25].
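    As a concrete illustration of Definition 2.2 and Remark 2.3, the following Python sketch (an illustrative helper, not part of the original derivation) evaluates the left and right Riemann-Liouville derivatives of a polynomial term by term with the monomial rule _0D_x^{\gamma}x^k = \frac{\Gamma(k+1)}{\Gamma(k+1-\gamma)}x^{k-\gamma} and combines them into the Riesz derivative with \xi_\gamma = -1/(2\cos\frac{\gamma\pi}{2}); the function names and the test polynomial x^2(1-x)^2 are assumptions chosen only to match the examples of Section 6.

```python
from math import gamma, cos, pi, comb

def left_rl_poly(coeffs, g, x):
    """Left Riemann-Liouville derivative of u(x) = sum_k coeffs[k] x**k,
    using _0D_x^g x^k = Gamma(k+1)/Gamma(k+1-g) * x^(k-g)."""
    return sum(c * gamma(k + 1) / gamma(k + 1 - g) * x**(k - g)
               for k, c in enumerate(coeffs) if c != 0.0)

def right_rl_poly(coeffs, g, x, l=1.0):
    """Right Riemann-Liouville derivative on [0, l]: rewrite u in powers of (l - x),
    d_j = sum_{k>=j} C(k, j) (-1)^j l^(k-j) coeffs[k], then apply the monomial rule."""
    n = len(coeffs)
    d = [sum(comb(k, j) * (-1)**j * l**(k - j) * coeffs[k] for k in range(j, n))
         for j in range(n)]
    return sum(dj * gamma(j + 1) / gamma(j + 1 - g) * (l - x)**(j - g)
               for j, dj in enumerate(d) if dj != 0.0)

def riesz_poly(coeffs, g, x, l=1.0):
    """Riesz derivative via Remark 2.3: xi_g * (left + right), xi_g = -1/(2 cos(g*pi/2))."""
    xi = -1.0 / (2.0 * cos(g * pi / 2.0))
    return xi * (left_rl_poly(coeffs, g, x) + right_rl_poly(coeffs, g, x, l))

# u(x) = x^2 (1 - x)^2 = x^2 - 2x^3 + x^4, the spatial profile used in Section 6
coeffs = [0.0, 0.0, 1.0, -2.0, 1.0]
print(riesz_poly(coeffs, 1.5, 0.3))
```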

    The generalization of the differential transform method to partial differential equations of fractional order was proposed by Odibat et al. [15]. This method is based on the two-dimensional differential transform method, the generalized Taylor formula and Caputo fractional derivatives.

    Consider a function of two variables u(x, t) and suppose that it can be represented as a product of two single-variable functions, i.e., u(x, t) = f(x)g(t) . If u(x, t) is analytic and can be differentiated continuously with respect to x and t in the domain of interest, then the function u(x, t) is represented as

    u(x, t) = \sum\limits_{k = 0}^{\infty}F_\alpha(k)(x-x_0)^{\alpha k} \sum\limits_{h = 0}^{\infty}G_\beta(h)(t-t_0)^{\beta h} = \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty}U_{\alpha, \beta}(k, h)(x-x_0)^{\alpha k}(t-t_0)^{\beta h}, (3.1)

    where 0 < \alpha, \beta \le 1 and U_{\alpha, \beta}(k, h) = F_\alpha(k)G_\beta(h) is called the spectrum of u(x, t) . The generalized two-dimensional differential transform of the function u(x, t) is as follows

    U_{\alpha, \beta}(k, h) = \dfrac{1}{\Gamma(\alpha k+1)\Gamma(\beta h+1)} \left[ (D_{x_0}^{\alpha})^k (D_{t_0}^{\beta})^h u(x, t) \right]_{(x_0, t_0)}, (3.2)

    where (D_{x_0}^{\alpha})^k = D_{x_0}^{\alpha} D_{x_0}^{\alpha} \cdots D_{x_0}^{\alpha} ( k times), and D_{x_0}^{\alpha} denotes the Caputo derivative operator.

    On the basis of notations (3.1) and (3.2), we have the following results.

    Theorem 3.1. Suppose that U_{\alpha, \beta}(k, h), V_{\alpha, \beta}(k, h) and W_{\alpha, \beta}(k, h) are the differential transforms of the functions u(x, t), v(x, t) and w(x, t) , respectively; then

    (a) If u(x, t) = v(x, t) \pm w(x, t) , then U_{\alpha, \beta}(k, h) = V_{\alpha, \beta}(k, h) \pm W_{\alpha, \beta}(k, h) ,

    (b) If u(x, t) = cv(x, t), \; c \in \mathbb{R} , then U_{\alpha, \beta}(k, h) = cV_{\alpha, \beta}(k, h) ,

    (c) If u(x, t) = v(x, t)w(x, t) , then U_{\alpha, \beta}(k, h) = \sum\limits_{r = 0}^{k} \sum\limits_{s = 0}^{h} V_{\alpha, \beta}(r, h-s)W_{\alpha, \beta}(k-r, s) ,

    (d) If u(x, t) = (x-x_0)^{n \alpha}(t-t_0)^{m \beta} , then U_{\alpha, \beta}(k, h) = \delta(k-n)\delta(h-m) ,

    (e) If u(x, t) = D_{x_0}^{\alpha}v(x, t), \; 0 < \alpha \le 1 , then U_{\alpha, \beta}(k, h) = \dfrac{\Gamma(\alpha(k+1)+1)}{\Gamma(\alpha k+1)}V_{\alpha, \beta}(k+1, h) .

    Theorem 3.2. Suppose that u(x, t) = f(x)g(t) , the function f(x) = x^{\lambda}h(x) , where \lambda > -1 and h(x) has the generalized Taylor series expansion h(x) = \sum\limits_{n = 0}^{\infty}a_n(x-x_0)^{\alpha n} , and

    (a) \beta < \lambda+1 and \alpha is arbitrary, or

    (b) \beta \ge \lambda+1 , \alpha is arbitrary, and a_k = 0 for k = 0, 1, .., m-1 , where m-1 < \beta \le m .

    Then the generalized differential transform (3.2) becomes

    U_{\alpha, \beta}(k, h) = \dfrac{1}{\Gamma(\alpha k+1)\Gamma(\beta h+1)} \left[ D_{x_0}^{\alpha k} (D_{t_0}^{\beta})^h u(x, t) \right]_{(x_0, t_0)}

    Theorem 3.3. Suppose that u(x, t) = f(x)g(t) , the function f(x) satisfies the conditions given in Theorem 3.2, and u(x, t) = D_{x_0}^{\gamma}v(x, t) ; then

    U_{\alpha, \beta}(k, h) = \dfrac{\Gamma(\alpha k+\gamma+1)}{\Gamma(\alpha k+1)}V_{\alpha, \beta}(k+\gamma/\alpha, h),

    where \alpha is selected such that \gamma/\alpha \in \mathbb{Z}^{+} .

    Theorems 3.1, 3.2 and 3.3 and their proofs are addressed in [14].
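    As a small numerical companion to property (c) of Theorem 3.1, the sketch below (a minimal Python illustration; the array layout for truncated spectra is an assumption of the illustration) computes the two-dimensional transform of a product from the transforms of its factors.

```python
import numpy as np

def gdtm_product(V, W):
    """Theorem 3.1(c): transform of u = v*w from truncated spectra,
    U(k, h) = sum_{r=0..k} sum_{s=0..h} V[r, h-s] * W[k-r, s].
    V and W are (K x H) arrays holding V_{alpha,beta}(k, h) and W_{alpha,beta}(k, h)."""
    K, H = V.shape
    U = np.zeros((K, H))
    for k in range(K):
        for h in range(H):
            for r in range(k + 1):
                for s in range(h + 1):
                    U[k, h] += V[r, h - s] * W[k - r, s]
    return U
```

    For \alpha = \beta = 1 this reduces to the Cauchy product of two truncated double Taylor series.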

    In the case \alpha = \beta = 1 , GDTM reduces to DTM, which is briefly introduced as follows.

    Definition 3.4. Suppose that u(x,t) is analytic and continuously differentiable with respect to space x and time t in the domain of interest, then

    U(k, h) = \dfrac{1}{k!h!} \left[ \dfrac{\partial^{k+h}}{\partial x^k \partial t^h}u(x, t) \right]_{(x_0, t_0)},

    where the spectrum function U(k, h) is the transformed function, also called the T-function, and u(x, t) is the original function. The inverse differential transform of U(k, h) is defined as

    u(x, t) = \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty}U(k, h)(x-x_0)^k(t-t_0)^h.

    From these equations we have

    u(x, t) = \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty} \dfrac{1}{k!h!} \left[ \dfrac{\partial^{k+h}}{\partial x^k \partial t^h}u(x, t) \right]_{(x_0, t_0)}(x-x_0)^k(t-t_0)^h.

    Definition 3.4 yields the following relations:

    (a) If u(x, t) = v(x, t) \pm w(x, t) , then U(k, h) = V(k, h) \pm W(k, h) ,

    (b) If u(x, t) = cv(x, t) , then U(k, h) = cV(k, h) ,

    (c) If u(x, t) = \dfrac{\partial}{\partial x}v(x, t) , then U(k, h) = (k+1)V(k+1, h) ,

    (d) If u(x, t) = \dfrac{\partial^{r+s}}{\partial x^r \partial t^s}v(x, t) , then U(k, h) = \dfrac{(k+r)!(h+s)!}{k!h!}V(k+r, h+s) ,

    (e) If u(x, t) = v(x, t)w(x, t) , then U(k, h) = \sum\limits_{r = 0}^{k} \sum\limits_{s = 0}^{h} V(r, h-s)W(k-r, s) .

    The basic definitions and fundamental operations of the two-dimensional differential transform can be found in [13].

    To develop an algorithm for applying GDTM to Eq (1.1), we need some new theorems and definitions, which are presented as follows.

    Theorem 4.1. Suppose u(x, t) = \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty}U(k, h)x^k t^h and v(x, t) = \, _0^RD_x^{\gamma}u(x, t) , 1 < \gamma < 2 , is the left Riemann-Liouville derivative; then

    v(x, t) = \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty}U(k, h) \dfrac{\Gamma(k+1)}{\Gamma(k+1-\gamma)}x^{k-\gamma}t^h.

    Proof. By replacing u(x, t) = \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty}U(k, h)x^k t^h in the definition of the left Riemann-Liouville derivative, it is easy to achieve

    \begin{array}{ll} v(x, t) & = \dfrac{1}{\Gamma(2-\gamma)} \dfrac{\partial^2}{\partial x^2} \int_0^x \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty}U(k, h)\xi^k t^h (x-\xi)^{1-\gamma}\, d\xi = \dfrac{1}{\Gamma(2-\gamma)} \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty}U(k, h)t^h \dfrac{\partial^2}{\partial x^2} \int_0^x \xi^k (x-\xi)^{1-\gamma}\, d\xi \\ & = \dfrac{1}{\Gamma(2-\gamma)} \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty}U(k, h)t^h \dfrac{\partial^2}{\partial x^2} \left( L^{-1}\left( L\left( \int_0^x \xi^k (x-\xi)^{1-\gamma}\, d\xi \right)\right)\right) = \dfrac{1}{\Gamma(2-\gamma)} \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty}U(k, h)t^h \dfrac{\partial^2}{\partial x^2} \left( L^{-1}\left( L\left( x^k * x^{1-\gamma} \right)\right)\right) \\ & = \dfrac{1}{\Gamma(2-\gamma)} \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty}U(k, h)t^h \dfrac{\partial^2}{\partial x^2} \left( L^{-1}\left( \dfrac{\Gamma(k+1)}{s^{k+1}} \dfrac{\Gamma(2-\gamma)}{s^{2-\gamma}} \right)\right) = \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty}U(k, h)t^h \dfrac{\partial^2}{\partial x^2} \left( \dfrac{\Gamma(k+1)}{\Gamma(k+3-\gamma)}x^{k+2-\gamma} \right) \\ & = \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty}U(k, h) \dfrac{\Gamma(k+1)}{\Gamma(k+1-\gamma)}x^{k-\gamma}t^h. \end{array}

    Theorem 4.2. Suppose u(x, t) = \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty}U(k, h)x^k t^h and v(x, t) = \, _x^RD_l^{\gamma}u(x, t) , 1 < \gamma < 2 , is the right Riemann-Liouville derivative; then

    v(x, t) = \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty} \sum\limits_{i = k}^{\infty} \binom{i}{k}(-1)^k l^{i-k}U(i, h) \dfrac{\Gamma(k+1)}{\Gamma(k+1-\gamma)}(l-x)^{k-\gamma}t^h.

    Proof. By replacing u(x, t) = \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty}U(k, h)x^k t^h in the definition of the right Riemann-Liouville derivative, it is easy to obtain

    v(x, t) = \dfrac{1}{\Gamma(2-\gamma)} \dfrac{\partial^2}{\partial x^2} \int_x^l \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty}U(k, h)\xi^k t^h (\xi-x)^{1-\gamma}\, d\xi = \dfrac{1}{\Gamma(2-\gamma)} \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty}U(k, h)t^h \dfrac{\partial^2}{\partial x^2} \int_x^l \xi^k (\xi-x)^{1-\gamma}\, d\xi.

    Substituting \xi-x = \tau and then setting l-x = y , we have

    \begin{array}{ll} v(x, t) & = \dfrac{1}{\Gamma(2-\gamma)} \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty}U(k, h)t^h \dfrac{\partial^2}{\partial x^2} \int_0^{l-x} (\tau+x)^k \tau^{1-\gamma}\, d\tau \\ & = \dfrac{1}{\Gamma(2-\gamma)} \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty}U(k, h)t^h \dfrac{\partial^2}{\partial x^2} \left( L^{-1}\left( L\left( \int_0^{y} (l-(y-\tau))^k \tau^{1-\gamma}\, d\tau \right)\right)\right) \\ & = \dfrac{1}{\Gamma(2-\gamma)} \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty}U(k, h)t^h \dfrac{\partial^2}{\partial x^2} \left( L^{-1}\left( L\left( (l-y)^k * y^{1-\gamma} \right)\right)\right) \\ & = \dfrac{1}{\Gamma(2-\gamma)} \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty}U(k, h)t^h \dfrac{\partial^2}{\partial x^2} \left( L^{-1}\left( L\left( \sum\limits_{i = 0}^{k} \binom{k}{i}(-1)^i l^{k-i} \left( y^i * y^{1-\gamma} \right) \right)\right)\right) \\ & = \dfrac{1}{\Gamma(2-\gamma)} \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty}U(k, h)t^h \dfrac{\partial^2}{\partial x^2} \left( L^{-1}\left( \sum\limits_{i = 0}^{k} \binom{k}{i}(-1)^i l^{k-i} \dfrac{\Gamma(i+1)}{s^{i+1}} \dfrac{\Gamma(2-\gamma)}{s^{2-\gamma}} \right)\right) \\ & = \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty}U(k, h)t^h \dfrac{\partial^2}{\partial x^2} \left( \sum\limits_{i = 0}^{k} \binom{k}{i}(-1)^i l^{k-i} \dfrac{\Gamma(i+1)}{\Gamma(i+3-\gamma)}(l-x)^{i+2-\gamma} \right) \\ & = \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty} \sum\limits_{i = 0}^{k} \binom{k}{i}(-1)^i l^{k-i}U(k, h) \dfrac{\Gamma(i+1)}{\Gamma(i+1-\gamma)}(l-x)^{i-\gamma}t^h \\ & = \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty} \sum\limits_{i = k}^{\infty} \binom{i}{k}(-1)^k l^{i-k}U(i, h) \dfrac{\Gamma(k+1)}{\Gamma(k+1-\gamma)}(l-x)^{k-\gamma}t^h. \end{array}

    In both Theorems 4.1 and 4.2, L is the Laplace transform operator, defined as L(f) = \int_0^{\infty}f(t)e^{-st}\, dt , and f*g denotes the convolution of f and g , defined as (f*g)(t) = \int_0^t f(t-\tau)g(\tau)\, d\tau .

    As observed in the calculation of the left and right Riemann-Liouville derivatives of u(x, t) , coefficients of x^{k-\gamma}t^h and (l-x)^{k-\gamma}t^h appear, respectively. Therefore we expect to see functions of x^{k-\gamma}t^h and (l-x)^{k-\gamma}t^h in equations that involve Riesz space fractional derivatives. In DTM, it is traditional to expand the function u(x, t) with respect to integer powers of x and t , i.e.,

    u(x, t) = \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty} \dfrac{1}{k!h!} \left[ \dfrac{\partial^{k+h}}{\partial x^k \partial t^h}u(x, t) \right]_{(x_0, t_0)}(x-x_0)^k(t-t_0)^h = \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty}U(k, h)(x-x_0)^k(t-t_0)^h,

    which means that u(x, t) has a Taylor series expansion. However, x^{k-\gamma} and (l-x)^{k-\gamma} are not smooth enough on [0, l] ; in this case we define different notations for the differential transformation.

    Definition 4.3. We use \overline{U}_{\gamma, 0}(k, h) as a notation for the differential transform of a function u(x, t) that has the series representation u(x, t) = \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty} \overline{U}_{\gamma, 0}(k, h)(x-x_0)^{k-\gamma}(t-t_0)^h .

    Definition 4.4. We use \overline{\overline{U}}_{\gamma, 0}(k, h) as a notation for the differential transform of a function u(x, t) that has the series representation u(x, t) = \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty} \overline{\overline{U}}_{\gamma, 0}(k, h)(l-(x-x_0))^{k-\gamma}(t-t_0)^h .

    Theorem 4.5. If v(x, t) = \, _0^RD_x^{\gamma}u(x, t) , 1 < \gamma < 2 , then \overline{V}_{\gamma, 0}(k, h) = \dfrac{\Gamma(k+1)}{\Gamma(k+1-\gamma)}U(k, h) . The proof follows directly from Theorem 4.1 and Definition 4.3 at (x_0, t_0) = (0, 0) .

    Theorem 4.6. If v(x, t) = \, _x^RD_l^{\gamma}u(x, t), \; 1 < \gamma < 2 , then

    \overline{\overline{V}}_{\gamma, 0}(k, h) = \sum\limits_{i = k}^{\infty} \binom{i}{k}(-1)^k l^{i-k} \dfrac{\Gamma(k+1)}{\Gamma(k+1-\gamma)}U(i, h).

    The proof follows directly from theorem 4.2 and definition 4.4 at (x0,t0)=(0,0).
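    The barred and double-barred transforms of Theorems 4.5 and 4.6 are purely algebraic operations on the spectrum U(k, h) , so they are easy to sketch in Python; truncating the infinite sum of Theorem 4.6 at the largest stored index is an assumption of this illustration, as are the helper names.

```python
from math import gamma, comb

def barred_V(U, g):
    """Theorem 4.5: Vbar(k, h) = Gamma(k+1)/Gamma(k+1-g) * U(k, h)."""
    K, H = len(U), len(U[0])
    return [[gamma(k + 1) / gamma(k + 1 - g) * U[k][h] for h in range(H)]
            for k in range(K)]

def double_barred_V(U, g, l=1.0):
    """Theorem 4.6: Vbarbar(k, h) = sum_{i>=k} C(i, k) (-1)^k l^(i-k)
    * Gamma(k+1)/Gamma(k+1-g) * U(i, h), truncated at the largest stored index."""
    K, H = len(U), len(U[0])
    out = [[0.0] * H for _ in range(K)]
    for k in range(K):
        c = gamma(k + 1) / gamma(k + 1 - g)
        for h in range(H):
            out[k][h] = sum(comb(i, k) * (-1)**k * l**(i - k) * c * U[i][h]
                            for i in range(k, K))
    return out
```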

    These definitions and theorems assist in describing the implementation of the method. To this aim, recall Eq (1.1); since u(x, t) is supposed to be smooth enough, we first focus on a special separation of f(x, t) in Eq (1.1) into three functions f_1, f_2, f_3 as follows

    f(x,t)=f_1(x,t)+f_2(x,t)+f_3(x,t),

    where f_1 is a function that contains integer powers of x and fractional powers of t , with the series representation f_1(x, t) = \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty}F_1(k, h)(x-x_0)^k(t-t_0)^{\beta h} ; f_2 is a function that contains x^{k-\gamma} and t , with the series representation f_2(x, t) = \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty} \overline{F}_2(k, h)(x-x_0)^{k-\gamma}(t-t_0)^h ; and f_3 is a function that involves (l-x)^{k-\gamma} and t , with the series representation f_3(x, t) = \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty} \overline{\overline{F}}_3(k, h)(l-(x-x_0))^{k-\gamma}(t-t_0)^h . We then take the following three steps.

    Step 1. The terms on the left side of Eq (1.1) that do not contain the Riesz space derivative are set equal to the part of the inhomogeneous term that has integer powers of x and fractional powers of t , i.e.,

    \left( _0^CD_t^{2\beta}+2k \, _0^CD_t^{\beta}+v^2 \right)u(x, t) = f_1(x, t). (4.1)

    Now we apply GDTM as follows

    \dfrac{\Gamma(\beta(h+2)+1)}{\Gamma(\beta h+1)}U(k, h+2)+2k \dfrac{\Gamma(\beta(h+1)+1)}{\Gamma(\beta h+1)}U(k, h+1)+v^2U(k, h) = F_1(k, h), (4.2)

    where F1(k,h) is the differential transform of f1(x,t).
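    For one fixed spatial index, relation (4.2) can be marched forward in h once U(k, 0) and U(k, 1) are known from the transformed initial conditions. The following Python sketch shows this step; the name k_damping stands for the constant k of Eq (1.1) (renamed here only to avoid clashing with the spatial index), and F1 is assumed to be a list of the values F_1(k, h) for that fixed k .

```python
from math import gamma

def solve_recurrence_42(F1, U0, U1, beta, k_damping, v, H):
    """March relation (4.2) in h for one fixed spatial index k:
    G(b(h+2)+1)/G(bh+1) U(h+2) + 2k G(b(h+1)+1)/G(bh+1) U(h+1) + v^2 U(h) = F1(h),
    starting from U(0) = U0 and U(1) = U1."""
    U = [0.0] * (H + 2)
    U[0], U[1] = U0, U1
    for h in range(H):
        a2 = gamma(beta * (h + 2) + 1) / gamma(beta * h + 1)
        a1 = gamma(beta * (h + 1) + 1) / gamma(beta * h + 1)
        U[h + 2] = (F1[h] - 2 * k_damping * a1 * U[h + 1] - v**2 * U[h]) / a2
    return U
```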

    Step 2. As mentioned in Remark 2.3, the Riesz space derivative contains the left and right Riemann-Liouville derivatives, i.e., \dfrac{\partial^\gamma}{\partial |x|^\gamma}u(x, t) = \xi_\gamma \left( _0^RD_x^{\gamma}+ \, _x^RD_L^{\gamma} \right)u(x, t) . For the next step we set

    \xi_\gamma \, _0^RD_x^{\gamma}u(x, t) = f_2(x, t), (4.3)

    then, using the differential transform method according to Theorem 4.5, we get

    \xi_\gamma \overline{U}_{\gamma, 0}(k, h) = \overline{F}_2(k, h). (4.4)

    Step 3. For the last step we set

    \xi_\gamma \, _x^RD_l^{\gamma}u(x, t) = f_3(x, t). (4.5)

    Then, applying the differential transform by means of the new notation according to Theorem 4.6, we obtain

    \xi_\gamma \overline{\overline{U}}_{\gamma, 0}(k, h) = \overline{\overline{F}}_3(k, h). (4.6)

    The differential transforms of the initial and boundary conditions are

    \begin{cases} U(k, 0) = \Phi(k), \; \; U(k, 1) = \Psi(k), \; \; k = 0, 1, ...\\ U(0, h) = 0 \; \; and \; \; \sum\limits_{k = 0}^{\infty}U(k, h) = 0, \; \; h = 0, 1, ... \end{cases} (4.7)

    where \Phi(k) and \Psi(k) are the differential transforms of \phi(x) and \psi(x) , respectively. The recurrence relations (4.2), (4.4) and (4.6) are compatible, and we calculate U(k, h), \; k, h = 0, 1, 2, ... , using each of them together with (4.7). Then, with the inverse transformation, we find the approximate solution of u(x, t) as follows

    u(x, t) \approx u_N(x, t) = \sum\limits_{k = 0}^{N} \sum\limits_{h = 0}^{N}U(k, h)x^k t^{\beta h}.
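    A direct Python rendering of this truncated inverse transform can look as follows (a minimal sketch; the nested-list layout of the spectrum U is an assumption of the illustration).

```python
def u_N(U, x, t, beta, N):
    """Truncated inverse transform: u_N(x, t) = sum_{k,h<=N} U[k][h] * x**k * t**(beta*h)."""
    return sum(U[k][h] * x**k * t**(beta * h)
               for k in range(N + 1) for h in range(N + 1))
```

    With beta = 1 this reduces to the ordinary truncated double Taylor series used in Examples 6.1, 6.2 and 6.4.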

    In this section we find an error bound for the two-dimensional GDTM. Since DTM is obtained from GDTM when the fractional powers reduce to integer orders, an error bound for the two-dimensional DTM can be found in the same manner.

    As mentioned before, we assume that u(x, t) is an analytic and separable function with the series representation u(x, t) = \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty}U(k, h)(x-x_0)^{\alpha k}(t-t_0)^{\beta h} ; then we have the following theorems.

    Theorem 5.1. Suppose v(x) = \sum\nolimits_{k = 0}^{\infty}a_k(x-x_0)^{\alpha k} and let \phi_k(x) = a_k(x-x_0)^{\alpha k} ; then the series solution \sum\nolimits_{k = 0}^{\infty}\phi_k(x) converges if there exist 0 < \gamma_1 < 1 and k_0 \in \mathbb{N} such that \| \phi_{k+1}(x)\| \le \gamma_1 \| \phi_{k}(x)\| for every k \ge k_0 , where \| \phi_{k}(x)\| = \max_{x \in I} | \phi_k(x) | and I = (x_0, x_0+r) , r > 0 .

    Remark 5.2. The series solution v(x) = \sum\nolimits_{k = 0}^{\infty}a_k(x-x_0)^{\alpha k} converges if \lim_{k \to \infty} \left| \dfrac{a_{k+1}}{a_k} \right| < 1/ \max_{x \in I}(x-x_0)^ \alpha, I = (x_0, x_0+r), \, \, r > 0 , in this case the function v(x) is called \alpha -analytic function at x_0 .

    Remark 5.3. Suppose S_n = \phi_0(x)+\phi_1(x)+ \cdots + \phi_n(x) , where \phi_k(x) = a_k(x-x_0)^{\alpha k} then for every n, m \in N, \, \, n \ge m > k_0 we have \| S_n-S_m\| \le \dfrac{1-\gamma_1^{n-m}}{1-\gamma_1}\gamma_1^{m-k_0+1} \max_{x \in I}| \phi_{k_0}(x)|. Notice that when n \to \infty then

    \|v(x)-S_m\| \le \dfrac{1}{1-\gamma_1}\gamma_1^{m-k_0+1} \max\limits_{x \in I}| \phi_{k_0}(x)|.

    The proofs of Theorem 5.1 and Remarks 5.2 and 5.3 are given in [26]. Now suppose that v(x) and w(t) are \alpha -analytic and \beta -analytic functions, respectively, such that v(x) = \sum\nolimits_{k = 0}^{\infty} a_k(x-x_0)^{\alpha k} and w(t) = \sum_{h = 0}^{\infty} b_h(t-t_0)^{\beta h} ; then the truncation error of u(x, t) = v(x)w(t) is estimated according to the next theorem.

    Theorem 5.4. Assume that the series solutions v(x) = \sum_{k = 0}^\infty \phi_k(x) with \phi_k(x) = a_k(x-x_0)^{\alpha k} and w(t) = \sum_{h = 0}^\infty \psi_h(t) with \psi_h(t) = b_h(t-t_0)^{\beta h} converge; then the maximum absolute truncation error of the function u(x, t) = v(x)w(t) is estimated by

    \begin{array}{l} \| u(x, t)- \sum\limits_{k = 0}^m \sum\limits_{h = 0}^q \phi_k(x) \psi_h(t) \| \le M \dfrac{1}{1- \gamma_2} \gamma_2^{q-q_0+1} \max\limits_{t \in J} |b_{q_0}(t-t_0)^{\beta q_0}|\\ +N \dfrac{1}{1- \gamma_1} \gamma_1^{m-m_0+1} \max\limits_{x \in I} |a_{m_0}(x-x_0)^{\alpha m_0}|\\ \end{array}

    where M = \max_{x \in I}|v(x)|, \, N = \max_{t \in J}|w(t)| , I = (x_0, x_0+r) , J = (t_0, t_0+l) , \gamma_1 and \gamma_2 are determined by theorem 5.1.

    Proof. From Remark 5.3, for n \ge m \ge m_0 with any m_0 \ge 0 we have \|S_n-S_m\| \le \dfrac{1-\gamma_1^{n-m}}{1-\gamma_1}\gamma_1^{m-m_0+1} \max_{x \in I}| a_{m_0}(x-x_0)^{\alpha m_0}| , where S_n = \phi_0(x)+\phi_1(x)+ \cdots + \phi_n(x) and a_{m_0} \ne 0 ;

    similarly, we conclude that if T_p = \psi_0(t)+\psi_1(t)+ \cdots + \psi_p(t) , then for p \ge q \ge q_0 with any q_0 \ge 0 ,

    \|T_p-T_q\| \le \dfrac{1-\gamma_2^{p-q}}{1-\gamma_2}\gamma_2^{q-q_0+1} \max\limits_{t \in J} | b_{q_0}(t-t_0)^{\beta q_0}|, \, \, J = (t_0, t_0+l), \, \, l > 0.

    It is clear that, when n \to \infty then S_n \to v(x) , and when p \to \infty then T_p \to w(t) , now

    \begin{array}{l} & \| u(x, t)- \sum\limits_{k = 0}^{m} \sum\limits_{h = 0}^q \phi_k(x)\psi_h(t) \|\\ & = \| v(x)w(t)- \sum\limits_{k = 0}^m \phi_k(x) \sum\limits_{h = 0}^q \psi_h(t)\|\\ & = \| v(x) w(t)- \sum\limits_{k = 0}^m \phi_k(x) \sum\limits_{h = 0}^q \psi_h(t)-v(x) \sum\limits_{h = 0}^q \psi_h(t)+v(x) \sum\limits_{h = 0}^q \psi_h(t) \|\\ & \le \|v(x)\| \| w(t)- \sum\limits_{h = 0}^q \psi_h(t)\|+ \| \sum\limits_{h = 0}^q \psi_h(t)\| \| v(x)- \sum\limits_{k = 0}^m \phi_k(x)\| \\ & \le M\dfrac{1- \gamma_2^{p-q}}{1- \gamma_2}\gamma_2^{q-q_0+1} \max\limits_{t \in J} | b_{q_0}(t-t_0)^{\beta q_0}| +N \dfrac{1- \gamma_1^{n-m}}{1- \gamma_1}\gamma_1^{m-m_0+1} \max\limits_{x \in I} | a_{m_0}(x-x_0)^{\alpha m_0}|.\\ \end{array}

    Since 0 < \gamma_1, \gamma_2 < 1 , we have (1-\gamma_1^{n-m}) < 1 and (1-\gamma_2^{p-q}) < 1 , and the last inequality reduces to

    \begin{array}{l} & \| u(x, t)- \sum\limits_{k = 0}^{m} \sum\limits_{h = 0}^q \phi_k(x) \psi_h(t) \|\\ & \le M \dfrac{1}{1- \gamma_2}\gamma_2^{q-q_0+1} \max\limits_{t \in J} | b_{q_0}(t-t_0)^{\beta q_0}| +N \dfrac{1}{1- \gamma_1}\gamma_1^{m-m_0+1} \max\limits_{x \in I} | a_{m_0}(x-x_0)^{\alpha m_0}|\\ \end{array}

    It is clear that when m \rightarrow \infty and q \rightarrow \infty the right-hand side of the last inequality tends to zero; therefore \sum_{k = 0}^m \sum_{h = 0}^q \phi_k(x) \psi_h(t) \rightarrow u(x, t) .

    We refer the reader to Anastassiou et al. [29] for the monotone convergence of extended iterative methods.
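    The tail estimate of Remark 5.3 can be checked numerically on a simple \alpha -analytic function; the choice v(x) = e^x on I = (0, 1) with \gamma_1 = 1/2 and k_0 = 1 is an assumption made only for this illustration.

```python
import numpy as np
from math import factorial

# v(x) = e^x = sum_k x^k / k! on I = (0, 1), alpha = 1:
# ||phi_k|| = 1/k!, so ||phi_{k+1}|| <= (1/2) ||phi_k|| for all k >= k_0 = 1.
gamma1, k0 = 0.5, 1
phi_k0 = 1.0          # max over I of |phi_1(x)| = x

x = np.linspace(0.0, 1.0, 1001)
for m in (5, 10, 15):
    bound = gamma1 ** (m - k0 + 1) / (1.0 - gamma1) * phi_k0   # Remark 5.3 tail bound
    S_m = sum(x**k / factorial(k) for k in range(m + 1))       # truncated series
    print(m, np.max(np.abs(np.exp(x) - S_m)), bound)           # true error stays below the bound
```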

    In this section, some examples are presented to verify the applicability and convenience of the proposed scheme. Note that DTM is used when the remaining derivatives in the equation are of integer order, and GDTM is used when the equation contains fractional-order time derivatives; examples of both cases are presented.

    Example 6.1. Consider the following Riesz space fractional telegraph equation with constant coefficients [27]

    \begin{equation} \dfrac{\partial^2 u(x, t)}{\partial t^2}+ 20 \dfrac{\partial u(x, t)}{\partial t}+25 u(x, t)- \dfrac{\partial^\gamma u(x, t)}{\partial |x|^\gamma} = f(x, t), \end{equation} (6.1)

    where the initial and boundary conditions are

    \begin{array}{l} & u(0, t) = u(1, t) = 0, \, \, \, \, 0 \le t \le T, \\ & u(x, 0) = 0, \, \, \dfrac{\partial u(x, 0)}{\partial t} = x^2(1-x)^2, \, \, \, 0 \le x \le 1, \end{array}

    and the inhomogeneous term is

    \begin{array}{l} f(x, t) = &x^2(1-x)^2 [24 \sin t+20 \cos t]\\ & +\dfrac{\sin t}{2 \cos \frac{\gamma \pi}{2}} \bigg\{ \dfrac{\Gamma(5)}{\Gamma(5- \gamma)}(x^{4-\gamma}+(1-x)^{4-\gamma})\\ &- 2 \dfrac{\Gamma(4)}{\Gamma(4-\gamma)}(x^{3-\gamma}+(1-x)^{3-\gamma})+ \dfrac{\Gamma(3)}{\Gamma(3-\gamma)}(x^{2-\gamma}+(1-x)^{2-\gamma}) \bigg\} \end{array}

    Under these assumptions, the exact solution of Eq (6.1) is u(x, t) = x^2(1-x)^2 \sin t .

    To use the mentioned algorithm, we first separate f(x, t) in the form of

    \begin{align} &f_1(x, t) = x^2(1-x)^2 [24 \sin t+20 \cos t], \\ & f_2(x, t) = \dfrac{\sin t}{2 \cos \frac{\gamma \pi}{2}} \bigg( \dfrac{\Gamma(5)}{\Gamma(5-\gamma)}x^{4-\gamma}-2 \dfrac{\Gamma(4)}{\Gamma(4-\gamma)}x^{3-\gamma}+ \dfrac{\Gamma(3)}{\Gamma(3-\gamma)}x^{2-\gamma} \bigg), \\ & f_3(x, t) = \dfrac{\sin t}{2 \cos \frac{\gamma \pi}{2}} \bigg( \dfrac{\Gamma(5)}{\Gamma(5-\gamma)}(1-x)^{4-\gamma}-2 \dfrac{\Gamma(4)}{\Gamma(4-\gamma)}(1-x)^{3-\gamma}+ \dfrac{\Gamma(3)}{\Gamma(3-\gamma)}(1-x)^{2-\gamma} \bigg), \end{align} (6.2)

    where f(x, t) = f_1 (x, t)+f_2 (x, t)+f_3 (x, t) , with this separation we set

    \begin{equation} \begin{cases} \dfrac{\partial^2 u(x, t)}{\partial t^2}+20 \dfrac{\partial u(x, t)}{\partial t}+25 u(x, t) = x^2(1-x)^2[24 \sin t +20 \cos t], \\ \dfrac{ 1}{2 \cos \frac{\gamma \pi}{2}}\; _0^R D_x^{\gamma}u(x, t) = \dfrac{\sin t}{2 \cos \frac{\gamma \pi}{2}} \bigg( \dfrac{\Gamma(5)}{\Gamma(5-\gamma)}x^{4-\gamma}-2 \dfrac{\Gamma(4)}{\Gamma(4-\gamma)}x^{3-\gamma}+ \dfrac{\Gamma(3)}{\Gamma(3-\gamma)}x^{2-\gamma} \bigg), \\ \dfrac{ 1}{2 \cos \frac{\gamma \pi}{2}}\; _x^R D_1^{\gamma}u(x, t) = \dfrac{\sin t}{2 \cos \frac{\gamma \pi}{2}} \bigg( \dfrac{\Gamma(5)}{\Gamma(5-\gamma)}(1-x)^{4-\gamma}\\ \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; -2 \dfrac{\Gamma(4)}{\Gamma(4-\gamma)}(1-x)^{3-\gamma}+ \dfrac{\Gamma(3)}{\Gamma(3-\gamma)}(1-x)^{2-\gamma} \bigg). \end{cases} \end{equation} (6.3)

    Applying the differential transform method and using Theorems 4.5 and 4.6 to system (6.3), we get

    \begin{equation} \begin{cases} (h+2)(h+1)U(k, h+2)+20(h+1)U(k, h+1)+25U(k, h)\\ \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; = (\delta(k-2)-2 \delta(k-3)+\delta(k-4))(24 S(h)+20C(h)), \\ \dfrac{\Gamma(k+1)}{\Gamma(k+1-\gamma)}U(k, h) \\ \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; = S(h) \bigg( \dfrac{\Gamma(5)}{\Gamma(5-\gamma)}\delta(k-4)-2 \dfrac{\Gamma(4)}{\Gamma(4- \gamma)}\delta(k-3)+ \dfrac{\Gamma(3)}{\Gamma(3- \gamma)}\delta(k-2)\bigg), \\ \sum\limits_{i = k}^\infty (-1)^k \binom{i}{k}U(i, h) \dfrac{\Gamma(k+1)}{\Gamma(k+1-\gamma)} \\ \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; = S(h) \bigg( \dfrac{\Gamma(5)}{\Gamma(5-\gamma)}\delta(k-4)- 2 \dfrac{\Gamma(4)}{\Gamma(4- \gamma)}\delta(k-3)+ \dfrac{\Gamma(3)}{\Gamma(3- \gamma)}\delta(k-2)\bigg).\\ \end{cases} \end{equation} (6.4)

    where S(h) and C(h) denote the differential transforms of \sin t and \cos t , respectively, which are obtained as

    \begin{equation} S(h) = \begin{cases} 0 \quad h \, is \, even\\ \dfrac{(-1)^{\frac{h-1}{2}}}{h!} \quad h \, is \, odd \end{cases} \quad C(h) = \begin{cases} 0 \quad h \, is \, odd\\ \dfrac{(-1)^{\frac{h}{2}}}{h!} \quad h \, is \, even \end{cases} \end{equation} (6.5)

    The differential transform of initial and boundary conditions are

    \begin{equation} \begin{cases} U(k, 0) = 0 \, and \, U(k, 1) = \delta(k-2)-2\delta(k-3)+\delta(k-4) \, \, \, k = 0, 1, 2, ...\\ U(0, h) = 0 \, and \, \sum\nolimits_{k = 0}^{\infty} U(k, h) = 0 , \, \, h = 0, 1, ... \end{cases} \end{equation} (6.6)

    From (6.6) it is easy to get U(0, 1) = U(1, 1) = 0 and U(2, 1) = 1, \; \; U(3, 1) = -2, \; \; U(4, 1) = 1 and U(k, 1) = 0 for k\ge 5 .

    From (6.4) with simple calculation we obtain

    U(k, 2) = 0 for k = 0, 1, 2, ...

    \begin{array}{l} & U(2, 3) = \dfrac{-1}{3!}, \; \; U(3, 3) = \dfrac{2}{3!}, \; \; U(4, 3) = \dfrac{-1}{3!} \, \, and \, \, U(k, 3) = 0 \, for \, k = 0, 1 \, \, and \; k = 5, 6, ..., \\ & U(k, 4) = 0 \, \, for \, \, k = 0, 1, 2, ...\\ & U(2, 5) = \dfrac{1}{5!}, \; \; U(3, 5) = \dfrac{-2}{5!}, \; \; U(4, 5) = \dfrac{1}{5!} \, \, and \, \, U(k, 5) = 0 \, for \, k = 0, 1 \, \, and\; k = 5, 6, ..., \\ & U(k, 6) = 0 \, \, for \, \, k = 0, 1, 2, ...\\ & U(2, 7) = \dfrac{-1}{7!}, \; \; U(3, 7) = \dfrac{2}{7!}, \; \; U(4, 7) = \dfrac{-1}{7!} \, \, and \, \, U(k, 7) = 0 \, for \, k = 0, 1 \, \, and\; k = 5, 6, ..., \\ & \vdots \end{array}

    Using the inverse differential transform method we have

    \begin{array}{l} u(x, t)& = \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty} U(k, h)x^k t^h\\ & = t(x^2-2x^3+x^4)- \dfrac{t^3}{3!}(x^2-2x^3+x^4)+ \dfrac{t^5}{5!}(x^2-2x^3+x^4)\\ & -\dfrac{t^7}{7!}(x^2-2x^3+x^4)+ \cdots\\ & = (x^2-2x^3+x^4)(t- \dfrac{t^3}{3!}+ \dfrac{t^5}{5!}-\dfrac{t^7}{7!}+ \cdots ) = x^2(1-x)^2 \sin t. \end{array}

    which is the exact solution.
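    The spectrum computed above can be reproduced with a short Python sketch; the function names and the sample point are assumptions of this illustration, and only the second relation of (6.4) together with the transform S(h) of Eq (6.5) is used.

```python
import numpy as np
from math import gamma, factorial

def S(h):
    """Differential transform of sin t, Eq (6.5)."""
    return 0.0 if h % 2 == 0 else (-1.0) ** ((h - 1) // 2) / factorial(h)

def spectrum_example_61(N, g=1.8):
    """U(k, h) from the second relation of (6.4); nonzero only for k = 2, 3, 4,
    where it reduces to U(2, h) = S(h), U(3, h) = -2 S(h), U(4, h) = S(h)."""
    U = np.zeros((N + 1, N + 1))
    rhs = {2: gamma(3) / gamma(3 - g), 3: -2 * gamma(4) / gamma(4 - g), 4: gamma(5) / gamma(5 - g)}
    for k, c in rhs.items():
        scale = gamma(k + 1) / gamma(k + 1 - g)
        for h in range(N + 1):
            U[k, h] = S(h) * c / scale
    return U

U = spectrum_example_61(N=15)
x, t = 0.4, 2.0
u_trunc = sum(U[k, h] * x**k * t**h for k in range(16) for h in range(16))
print(abs(u_trunc - x**2 * (1 - x)**2 * np.sin(t)))   # small truncation error at t = 2
```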

    Chen et al. [27] proposed a class of unconditionally stable difference schemes (FD) based on the Padé approximation to solve problem (6.1) at t = 2.0 with several choices of h and \tau , where h and \tau are the space and time step sizes, respectively. In Table 1, we compare \| u-u_{N} \|_2 at t = 2.0 for different choices of N . The results show that our method is more accurate than the three schemes proposed in their work. The main advantage of this method is its lower computational cost compared to Chen et al. [27], who, with \tau = 10^{-4} as the time step length, need 2/\tau = 20000 iterations to reach t = 2 .

    Table 1.  Comparison of \| u-u_N \|_2 for different values of h and \tau for Example 6.1 at t = 2.0.

    N     Improved GDTM    |  h         FD schemes [27] ( \tau = 10^{-4} )
                           |            Scheme I      Scheme II     Scheme III
    5     3.5194e-4        |  0.25      8.5871e-4     1.0948e-3     1.7573e-3
    10    5.8880e-7        |  0.125     1.9827e-4     2.5363e-4     4.2490e-4
    15    3.4711e-12       |  0.0625    5.0593e-5     6.1691e-5     1.0145e-4
    20    3.3912e-16       |  0.03125   1.4007e-5     1.6003e-5     2.4455e-5


    Example 6.2. Consider the following partial differential equation with Riesz space fractional derivative [28]

    \begin{equation} \dfrac{\partial u(x, t)}{\partial t}+ u(x, t)+ \dfrac{\partial^\gamma u(x, t)}{\partial |x|^\gamma} = f(x, t), \; \; 0 \le x \le 1, \, \, 0 \le t \le 1, \end{equation} (6.7)

    with initial condition

    u(x, 0) = x^2(1-x)^2\, \, \, \, \, \, \, \, 0 \le x \le 1,

    and boundary conditions

    u(0, t) = u(1, t) = 0, \, \, \, 0 \le t \le 1,

    where the forced term is

    \begin{array}{l} f(x, t) = & \dfrac{(t+1)^4}{\cos \frac{\gamma \pi}{2}} \bigg\{ \dfrac{12(x^{4-\gamma}+(1-x)^{4-\gamma})}{\Gamma(5-\gamma)}- \dfrac{6(x^{3-\gamma}+(1-x)^{3-\gamma})}{\Gamma(4-\gamma)}\\&\; \; \; \; +\dfrac{(x^{2-\gamma}+(1-x)^{2-\gamma})}{\Gamma(3-\gamma)}\bigg\}+(5+t)(t+1)^3x^2(1-x)^2. \end{array}

    The exact solution of Eq (6.7) is u(x, t) = x^2(1-x)^2(t+1)^4.

    To use the method, we separate the forcing term as

    \begin{align} &f_1(x, t) = x^2(1-x)^2 (5+t)(t+1)^3, \\ & f_2(x, t) = \dfrac{(t+1)^4}{ \cos \frac{\gamma \pi}{2}} \bigg( \dfrac{12x^{4-\gamma}}{\Gamma(5-\gamma)}- \dfrac{6x^{3-\gamma}}{\Gamma(4-\gamma)}+ \dfrac{x^{2-\gamma}}{\Gamma(3-\gamma)} \bigg), \\ & f_3(x, t) = \dfrac{(t+1)^4}{ \cos \frac{\gamma \pi}{2}} \bigg( \dfrac{12(1-x)^{4-\gamma}}{\Gamma(5-\gamma)}- \dfrac{6(1-x)^{3-\gamma}}{\Gamma(4-\gamma)}+ \dfrac{(1-x)^{2-\gamma}}{\Gamma(3-\gamma)} \bigg). \end{align} (6.8)

    Now we set

    \begin{equation} \begin{cases} \dfrac{\partial u(x, t)}{\partial t}+ u(x, t) = x^2(1-x)^2 (5+t)(t+1)^3, \\ -\dfrac{ 1}{2 \cos \frac{\gamma \pi}{2}}(_0^R D_x^{\gamma}u(x, t)) = \dfrac{(t+1)^4}{ \cos \frac{\gamma \pi}{2}} \bigg( \dfrac{12x^{4-\gamma}}{\Gamma(5-\gamma)}- \dfrac{6x^{3-\gamma}}{\Gamma(4-\gamma)}+ \dfrac{x^{2-\gamma}}{\Gamma(3-\gamma)} \bigg), \\ -\dfrac{ 1}{2 \cos \frac{\gamma \pi}{2}}(_x^R D_1^{\gamma}u(x, t)) = \dfrac{(t+1)^4}{ \cos \frac{\gamma \pi}{2}} \bigg( \dfrac{12(1-x)^{4-\gamma}}{\Gamma(5-\gamma)}- \dfrac{6(1-x)^{3-\gamma}}{\Gamma(4-\gamma)}+ \dfrac{(1-x)^{2-\gamma}}{\Gamma(3-\gamma)} \bigg). \end{cases} \end{equation} (6.9)

    Using the differential transform method for system (6.9) and applying Theorems 4.5 and 4.6, we get

    \begin{equation} \begin{cases} (h+1)U(k, h+1)+U(k, h) = (\delta(k-2)-2 \delta(k-3)+\delta(k-4))( \delta(h-4)\\\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; +8 \delta(h-3)+18 \delta(h-2)+16 \delta(h-1)+5 \delta(h)), \\ -\dfrac{\Gamma(k+1)}{\Gamma(k+1-\gamma)}U(k, h) \\ = \big( \delta (h)+4\delta(h-1)+6 \delta(h-2)+ 4 \delta(h-3)+ \delta(h-4)\big) \\ \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \times \bigg(\dfrac{12}{\Gamma(5-\gamma)}\delta(k-4) - \dfrac{6}{\Gamma(4- \gamma)}\delta(k-3)+ \dfrac{1}{\Gamma(3- \gamma)}\delta(k-2) \bigg), \\ - \sum\limits_{i = k}^{\infty}(-1)^k \binom{i}{k} U(i, h) \dfrac{\Gamma(k+1)}{\Gamma(k+1-\gamma)}\\ \; \; \; \; \; \; \; \; \; \; = \big (\delta(h)+4 \delta(h-1)+6 \delta(h-2)+4 \delta(h-3)+ \delta(h-4)\big )\\\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \times \bigg( \dfrac{12}{\Gamma(5-\gamma)}\delta(k-4)- \dfrac{6}{\Gamma(4- \gamma)}\delta(k-3)+ \dfrac{1}{\Gamma(3- \gamma)}\delta(k-2) \bigg). \end{cases} \end{equation} (6.10)

    Differential transform of initial and boundary conditions are

    \begin{equation} \begin{cases} U(k, 0) = \delta(k-2)-2 \delta(k-3)+\delta(k-4), \, \, \, k = 0, 1, 2, ...\\ U(0, h) = 0 \, \, and \, \, \sum_{k = 0}^\infty U(k, h) = 0, \; \;h = 0, 1, 2, .... \end{cases} \end{equation} (6.11)

    From (6.11) we have

    \begin{array}{l} &U(2, 0) = 1, \; \;U(3, 0) = -2, \; \;U(4, 0) = 1 \, and \, U(k, 0) = 0 \, \, for \, k = 0, 1 \, \, and \, k = 5, 6, ....\\ &U(0, h) = 0, \, \, \, h = 0, 1, 2, ... \end{array}

    From (6.10) we obtain

    \begin{array}{l} & U(2, 1) = 4, \; \; U(3, 1) = -8, \; \; U(4, 1) = 4 \, \, and \, \, U(k, 1) = 0 \, \, for \, \, k = 0, 1, 5, 6, ...\\ & U(2, 2) = 6, \; \;U(3, 2) = -12, \; \; U(4, 2) = 6 \, \, and \, \, U(k, 2) = 0 \, \, for \, \, k = 0, 1, 5, 6, ...\\ & U(2, 3) = 4, \; \;U(3, 3) = -8, \; \; U(4, 3) = 4 \, \, and \, \, U(k, 3) = 0 \, \, for \, \, k = 0, 1, 5, 6, ...\\ & U(2, 4) = 1, \; \;U(3, 4) = -2, \; \; U(4, 4) = 1 \, \, and \, \, U(k, 4) = 0 \, \, for \, \, k = 0, 1, 5, 6, ... \end{array}

    and U(k, h) = 0 for h\ge 5 , and k = 0, 1, 2, ... .

    Therefore, using the inverse differential transform, we have

    \begin{array}{l} u(x, t)& = \sum\limits_{k = 0}^{\infty} \sum\limits_{h = 0}^{\infty} U(k, h)x^k t^h = (x^2-2x^3+x^4)+4t(x^2-2x^3+x^4)\\ &+6t^2(x^2-2x^3+x^4)+4t^3(x^2-2x^3+x^4)+t^4(x^2-2x^3+x^4)\\ & = (x^2-2x^3+x^4)(1+4t+6t^2+4t^3+t^4)\\ & = x^2(1-x)^2(t+1)^4. \end{array}

    Zhang et al. [28] obtained numerical results based on a difference scheme for Eq (6.7) with different values of \gamma . In contrast, with our proposed improved GDTM an analytical solution of Eq (6.7) is found.
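    The time-marching use of the first relation in (6.10) can be verified with a short sketch; the array sizes and variable names below are assumptions of this illustration.

```python
import numpy as np

def march_time_recurrence(U0, rhs, H):
    """First relation of (6.10): (h+1) U(k, h+1) + U(k, h) = RHS(k, h),
    marched in h from the initial spectrum U(k, 0)."""
    K = len(U0)
    U = np.zeros((K, H + 1))
    U[:, 0] = U0
    for h in range(H):
        U[:, h + 1] = (rhs[:, h] - U[:, h]) / (h + 1)
    return U

K, H = 6, 8
space = np.zeros(K); space[2:5] = [1.0, -2.0, 1.0]                       # x^2 - 2x^3 + x^4
time_poly = np.array([5.0, 16.0, 18.0, 8.0, 1.0] + [0.0] * 4)            # (5 + t)(t + 1)^3
rhs = np.outer(space, time_poly)
U = march_time_recurrence(space, rhs, H)
print(U[2, :6])   # -> [1. 4. 6. 4. 1. 0.], the coefficients of (t + 1)^4
```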

    Example 6.3. Consider the following fractional telegraph equation with Riesz space fractional derivative [7]

    \begin{align} \begin{split} _0^CD_t^{2 \beta}u(x, t)+ \, _0^C D_t^\beta u(x, t)-( _0^{R}D_x^\gamma+ \, _x^{R}D_l^\gamma )u(x, t) = f(x, t), \\ \, \, \, 1/2 \lt \beta \lt 1, \, \, 0 \le x \le 1, \, \, 0 \le t\le 1, \end{split} \end{align} (6.12)

    with initial condition

    u(x, 0) = x^2(1-x)^2, \; \; \dfrac{\partial u(x, 0)}{\partial t} = -4x^2(1-x)^2 \, \, \, \, 0 \le x \le 1,

    and boundary conditions

    u(0, t) = u(1, t) = 0, \, \, \, \, 0\le t \le1,

    where the inhomogeneous term is

    \begin{array}{l} f(x, t) = \bigg( \dfrac{8t^{2-2\beta}}{\Gamma(3-2 \beta)}+ \dfrac{8t^{2-\beta}}{\Gamma(3- \beta)}- \dfrac{4t^{1-\beta}}{\Gamma(2- \beta)} \bigg) x^2(1-x)^2 \\ \; \; \; \; \; \; \; \; \; \; + (2t-1)^2 \bigg( \dfrac{2(x^{2-\gamma}+(1-x)^{2-\gamma})}{\Gamma(3- \gamma)}- \dfrac{12(x^{3-\gamma}+(1-x)^{3-\gamma})}{\Gamma(4- \gamma)}+ \dfrac{24(x^{4-\gamma}+(1-x)^{4-\gamma})}{\Gamma(5- \gamma)} \bigg). \end{array}

    The exact solution of Eq (6.12) is u(x, t) = (4t^2-4t+1)x^2(1-x)^2.

    For using mentioned algorithm, we set

    \begin{equation} \begin{cases} _0^CD_t^{2 \beta}u(x, t)+ \, _0^C D_t^\beta u(x, t) = \bigg( \dfrac{8t^{2-2\beta}}{\Gamma(3-2 \beta)}+ \dfrac{8t^{2-\beta}}{\Gamma(3- \beta)}- \dfrac{4t^{1-\beta}}{\Gamma(2- \beta)}\bigg)x^2(1-x)^2, \\ - _0^{R}D_x^{\gamma}u(x, t) = (2t-1)^2 \bigg( \dfrac{2x^{2-\gamma}}{\Gamma(3-\gamma)}- \dfrac{12x^{3-\gamma}}{\Gamma(4-\gamma)}+\dfrac{24x^{4-\gamma}}{\Gamma(5-\gamma)} \bigg), \\ - _x^{R}D_1^{\gamma}u(x, t) = (2t-1)^2 \bigg( \dfrac{2(1-x)^{2-\gamma}}{\Gamma(3-\gamma)}- \dfrac{12(1-x)^{3-\gamma}}{\Gamma(4-\gamma)}+\dfrac{24(1-x)^{4-\gamma}}{\Gamma(5-\gamma)} \bigg), \\ \end{cases} \end{equation} (6.13)

    Suppose 2\beta = 1.6 ; we choose \alpha = 0.2 , and using GDTM and Theorems 4.5 and 4.6 for the fractional partial differential equation we get

    \begin{equation} \begin{cases} \dfrac{\Gamma(0.2h+2.6)}{\Gamma(0.2h+1)}U(k, h+8)+ \dfrac{\Gamma(0.2h+1.8)}{\Gamma(0.2h+1)}U(k, h+4)\\ \; \; \; \; \; \; \; \; \; \; = \bigg(\dfrac{8 \delta(h-2)}{\Gamma(3-2 \beta)}+ \dfrac{8 \delta(h-6)}{\Gamma(3- \beta)}-\dfrac{4 \delta(h-1)}{\Gamma(2- \beta)}\bigg)( \delta(k-2)-2 \delta(k-3)+\delta(k-4)), \\ -\dfrac{\Gamma(k+1)}{\Gamma(k+1-\gamma)}U(k, h) = (4 \delta(h-10)+4\delta(h-5)+\delta(h))\\ \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \times \bigg( \dfrac{2}{\Gamma(3-\gamma)}\delta(k-2)- \dfrac{12}{\Gamma(4-\gamma)} \delta(k-3)+ \dfrac{24}{\Gamma(5-\gamma)}\delta(k-4) \bigg), \\ - \sum\limits_{i = k}^\infty (-1)^k \binom{i}{k}U(i, h) \dfrac{\Gamma(k+1)}{\Gamma(k+1-\gamma)} = (4 \delta(h-10)+4\delta(h-5)+\delta(h)) \\ \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \times \bigg( \dfrac{2\delta(k-2)}{\Gamma(3-\gamma)}-\dfrac{12\delta(k-3)}{\Gamma(4-\gamma)}+\dfrac{24\delta(k-4)}{\Gamma(5-\gamma)} \bigg). \end{cases} \end{equation} (6.14)

    Generalized differential transform for initial and boundary conditions are

    \begin{equation} \begin{cases} U(k, 0) = \delta(k-2)- 2 \delta(k-3)+ \delta(k-4), \, \, \, \, k = 0, 1, 2, ...\\ U(k, 5) = -4 \delta(k-2)+8 \delta(k-3)-4 \delta(k-4), \\ U(0, h) = 0 \, \, and \, \, \sum\nolimits_{k = 0}^\infty U(k, h) = 0, \, \, \, h = 0, 1, 2, ... \end{cases} \end{equation} (6.15)

    From (6.15) we obtain U(2, 0) = 1, \; U(3, 0) = -2, \; U(4, 0) = 1 and U(k, 0) = 0 for k = 0, 1, 5, 6, ...,

    U(2, 5) = -4, \; U(3, 5) = 8, \; U(4, 5) = -4 and U(k, 5) = 0 for k = 0, 1, 5, 6, ...

    From (6.14) it is easy to find

    U(2, 10) = 4, \; U(3, 10) = -8, \; U(4, 10) = 4 and \; U(k, 10) = 0 for k = 0, 1, 5, 6, ...

    Otherwise U(k, h) = 0.

    Using the inverse generalized differential transform we obtain

    \begin{array}{l} u(x, t) = \sum\limits_{k = 0}^\infty \sum\limits_{h = 0}^\infty U(k, h)x^kt^{0.2h}& = (x^2-2x^3+x^4)-4t(x^2-2x^3+x^4)\\+4t^2(x^2-2x^3+x^4)\\ = x^2(1-x)^2(2t-1)^2. \end{array}

    Zhao et al. [7] used fractional difference and finite element methods in the spatial direction to obtain a numerical solution of Eq (6.12). In contrast, with our method the exact solution is achieved for this equation, which demonstrates that the method is effective and reliable for the fractional telegraph equation with Riesz space-fractional derivative.

    Example 6.4. For the last example consider the following Riesz space fractional telegraph equation

    \begin{equation} \dfrac{\partial^2 u(x, t)}{\partial t^2}+4 \dfrac{\partial u(x, t)}{\partial t}+4u(x, t)- \dfrac{\partial^\gamma u(x, t)}{\partial |x|^\gamma} = f(x, t), \end{equation} (6.16)

    where the initial and boundary conditions are

    \begin{array}{l} &u(0, t) = u(1, t) = 0 , \, \, \, \, 0 \le t \le T, \\ &u(x, 0) = 0, \; \, \, \dfrac{\partial u(x, 0)}{\partial t} = x^2(1-x)^2e^x, \, \, \, 0 \le x \le 1, \end{array}

    and the inhomogeneous term is

    \begin{array}{l} &f(x, t) = x^2(1-x)^2e^x [3 \sin t+ 4 \cos t]\\ & +\dfrac{\sin t}{2 \cos \frac{\gamma \pi}{2}} \sum\limits_{n = 0}^\infty \dfrac{1}{n!} \{ \dfrac{\Gamma(5)}{\Gamma(5-\gamma)}(x^{n+4-\gamma}+(1-x)^{n+4-\gamma})\\ & +2 \dfrac{\Gamma(4)}{\Gamma(4-\gamma)}(x^{n+3-\gamma}+(1-x)^{n+3-\gamma})+\dfrac{\Gamma(3)}{\Gamma(3-\gamma)}(x^{n+2-\gamma}+(1-x)^{n+2-\gamma})\}. \end{array}

    Under these assumptions, the exact solution of Eq (6.16) is u(x, t) = x^2(1-x)^2e^x \sin t. The approximate solution (with m = q = 10 ) and the exact solution of Example 6.4 are illustrated in Figure 1. The exact and approximate solutions for t = 0.5 and t = 1 are shown in Figure 2, which demonstrates the accuracy of the method.

    Figure 1.  The exact solution (left) and approximate solution (right) for Example 6.4.
    Figure 2.  The graphs of exact solution (solid) and approximate solution (cross) for different values of t ( t = 0.5 left, t = 1 right) for example 6.4.
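    Because the exact solution of Example 6.4 is separable, its truncated two-dimensional spectrum factorizes into a one-dimensional DTM of x^2(1-x)^2e^x (a Cauchy product, cf. Theorem 3.1(c)) and the transform S(h) of \sin t . The following sketch, with truncation order N = 12 chosen as an assumption, reproduces the kind of agreement shown in Figures 1 and 2.

```python
import numpy as np
from math import factorial

N = 12
poly = np.zeros(N + 1); poly[2:5] = [1.0, -2.0, 1.0]                     # x^2 - 2x^3 + x^4
expo = np.array([1.0 / factorial(n) for n in range(N + 1)])              # spectrum of e^x
A = np.convolve(poly, expo)[:N + 1]                                      # 1D DTM of x^2(1-x)^2 e^x
S = np.array([0.0 if h % 2 == 0 else (-1.0) ** ((h - 1) // 2) / factorial(h)
              for h in range(N + 1)])                                    # DTM of sin t, Eq (6.5)
U = np.outer(A, S)                                                       # separable spectrum U(k, h)

x, t = 0.5, 1.0
u_trunc = sum(U[k, h] * x**k * t**h for k in range(N + 1) for h in range(N + 1))
print(abs(u_trunc - x**2 * (1 - x)**2 * np.exp(x) * np.sin(t)))          # small truncation error
```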

    Riesz derivative operators appear in several partial differential equations, such as the telegraph equation, the wave equation, the diffusion equation, the advection-dispersion equation and others. These types of equations were previously solved by GDTM without the Riesz derivative operator being considered. In this paper an improved scheme based on GDTM was developed for the solution of fractional partial differential equations with Riesz space fractional derivative. For this purpose the main equation was separated into sub-equations in a manner that allows GDTM to be applied. In this way the main equation reduces to a system of algebraic recurrence relations which can be solved easily. The results demonstrated that this method requires less computational work than other numerical methods; moreover, it is an efficient and convenient technique. Providing a convergent series solution with a fast convergence rate is a further advantage of the proposed method, as the numerical examples reveal.

    The authors would like to express their gratitude to the anonymous referees for their helpful comments and suggestions, which have greatly improved the paper.

    The authors declare no conflict of interests in this paper.



    [1] H. Mohamed, R. Omar, N. Saeed, A. Essam, N. Ayman, T. Mohiy, et al., Automated detection of white blood cells cancer diseases, in 2018 First International Workshop on Deep and Representation Learning (IWDRL), (2018), 48–54. https://doi.org/10.1109/IWDRL.2018.8358214
    [2] M. S. Kruskall, T. H. Lee, S. F. Assmann, M. Laycock, L. A. Kalish, M. M. Lederman, et al., Survival of transfused donor white blood cells in hiv-infected recipients, Blood J. Am. Soc. Hematol., 98 (2001), 272–279. https://doi.org/10.1182/blood.V98.2.272 doi: 10.1182/blood.V98.2.272
    [3] F. Xing, L. Yang, Robust nucleus/cell detection and segmentation in digital pathology and microscopy images: a comprehensive review, IEEE Rev. Biomed. Eng., 9 (2016), 234–263. https://doi.org/10.1109/RBME.2016.2515127 doi: 10.1109/RBME.2016.2515127
    [4] X. Zheng, Y. Wang, G. Wang, J. Liu, Fast and robust segmentation of white blood cell images by self-supervised learning, Micron, 107 (2018), 55–71. https://doi.org/10.1016/j.micron.2018.01.010 doi: 10.1016/j.micron.2018.01.010
    [5] Z. Zhu, S. H. Wang, Y. D. Zhang, Rernet: A deep learning network for classifying blood cells, Technol. Cancer Res. Treatment, 22 (2023), 15330338231165856. https://doi.org/10.1177/15330338231165856 doi: 10.1177/15330338231165856
    [6] Z. Zhu, Z. Ren, S. Lu, S. Wang, Y. Zhang, Dlbcnet: A deep learning network for classifying blood cells, Big Data Cognit. Comput., 7 (2023), 75. https://doi.org/10.3390/bdcc7020075 doi: 10.3390/bdcc7020075
    [7] C. Cheuque, M. Querales, R. León, R. Salas, R. Torres, An efficient multi-level convolutional neural network approach for white blood cells classification, Diagnostics, 12 (2022), 248. https://doi.org/10.3390/diagnostics12020248 doi: 10.3390/diagnostics12020248
    [8] Y. Zhou, Y. Wu, Z. Wang, B. Wei, M. Lai, J. Shou, et al., Cyclic learning: Bridging image-level labels and nuclei instance segmentation, IEEE Trans. Med. Imaging, 42 (2023), 3104–3116. https://doi.org/10.1109/TMI.2023.3275609 doi: 10.1109/TMI.2023.3275609
    [9] Z. Gao, J. Shi, X. Zhang, Y. Li, H. Zhang, J. Wu, et al., Nuclei grading of clear cell renal cell carcinoma in histopathological image by composite high-resolution network, in International Conference on Medical Image Computing and Computer-Assisted Intervention, (2021), 132–142. https://doi.org/10.1007/978-3-030-87237-3_13
    [10] X. Liu, L. Song, S. Liu, Y. Zhang, A review of deep-learning-based medical image segmentation methods, Sustainability, 13 (2021), 1224. https://doi.org/10.3390/su13031224 doi: 10.3390/su13031224
    [11] O. Ronneberger, P. Fischer, T. Brox, U-net: Convolutional networks for biomedical image segmentation, in International Conference on Medical Image Computing and Computer-Assisted Intervention, (2015), 234–241. https://doi.org/10.1007/978-3-319-24574-4_28
    [12] F. Falck, C. Williams, D. Danks, G. Deligiannidis, C. Yau, C. Holmes, et al., A multi-resolution framework for U-Nets with applications to hierarchical VAEs, Adv. Neural Inf. Process. Syst., 35 (2022), 15529–15544.
    [13] Z. Zhou, M. Rahman Siddiquee, N. Tajbakhsh, J. Liang, Unet++: A nested u-net architecture for medical image segmentation, in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, (2018), 3–11. https://doi.org/10.1007/978-3-030-00889-5_1
    [14] H. Huang, L. Lin, R. Tong, H. Hu, Q. Zhang, Y. Iwamoto, et al., Unet 3+: A full-scale connected unet for medical image segmentation, in IEEE International Conference on Acoustics, Speech and Signal Processing, (2020), 1055–1059. https://doi.org/10.1109/icassp40776.2020.9053405
    [15] F. Jia, J. Liu, X. Tai, A regularized convolutional neural network for semantic image segmentation, Anal. Appl., 19 (01), 147–165. https://doi.org/10.1142/s0219530519410148
    [16] N. Akram, S. Adnan, M. Asif, S. Imran, M. Yasir, R. Naqvi, et al., Exploiting the multiscale information fusion capabilities for aiding the leukemia diagnosis through white blood cells segmentation, IEEE Access, 10 (2022), 48747–48760. https://doi.org/10.1109/access.2022.3171916 doi: 10.1109/access.2022.3171916
    [17] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, et al., An image is worth 16x16 words: Transformers for image recognition at scale, in International Conference on Learning Representations, 2021.
    [18] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, et al., Swin transformer: Hierarchical vision transformer using shifted windows, in Proceedings of the IEEE International Conference on Computer Vision, (2021), 10012–10022. https://doi.org/10.1109/iccv48922.2021.00986
    [19] C. Nicolas, M. Francisco, G. Synnaeve, N. Usunier, A. Kirillov, S. Zagoruyko, End-to-end object detection with transformers, in European Conference on Computer Vision, (2020), 213–229.
    [20] O. Petit, N. Thome, C. Rambour, L. Themyr, T. Collins, L. Soler, U-net transformer: Self and cross attention for medical image segmentation, in International Workshop on Machine Learning in Medical Imaging, (2021), 267–276. https://doi.org/10.1007/978-3-030-87589-3_28
    [21] J. Valanarasu, P. Oza, I. Hacihaliloglu, V. Patel, Medical transformer: Gated axial-attention for medical image segmentation, in Medical Image Computing and Computer Assisted Intervention, (2021), 36–46. https://doi.org/10.1007/978-3-030-87193-2_4
    [22] H. Cao, Y. Wang, J. Chen, D. Jiang, X. Zhang, Q. Tian, et al., Swin-unet: Unet-like pure transformer for medical image segmentation, in Proceedings of European Conference on Computer Vision Workshops, 3 (2023), 205–218. https://doi.org/10.1007/978-3-031-25066-8_9
    [23] Z. Chi, Z. Wang, M. Yang, D. Li, W. Du, Learning to capture the query distribution for few-shot learning, IEEE Trans. Circuits Syst. Video Technol., 32 (2021), 4163–4173. https://doi.org/10.1109/tcsvt.2021.3125129 doi: 10.1109/tcsvt.2021.3125129
    [24] G. Li, S. Masuda, D. Yamaguchi, M. Nagai, The optimal gnn-pid control system using particle swarm optimization algorithm, Int. J. Innovative Comput. Inf. Control, 5 (2009), 3457–3469. https://doi.org/10.1109/GSIS.2009.5408225 doi: 10.1109/GSIS.2009.5408225
    [25] Y. Wang, K. Yi, X. Liu, Y. Wang, S. Jin, Acmp: Allen-cahn message passing with attractive and repulsive forces for graph neural networks, in International Conference on Learning Representations, 2022.
    [26] S. Min, Z. Gao, J. Peng, L. Wang, K. Qin, B. Fang, Stgsn—a spatial–temporal graph neural network framework for time-evolving social networks, Knowl. Based Syst., 214 (2021), 106746. https://doi.org/10.1016/j.knosys.2021.106746 doi: 10.1016/j.knosys.2021.106746
    [27] B. Bumgardner, F. Tanvir, K. Saifuddin, E. Akbas, Drug-drug interaction prediction: a purely smiles based approach, in IEEE International Conference on Big Data, (2021), 5571–5579. https://doi.org/10.1109/bigdata52589.2021.9671766
    [28] J. Bruna, W. Zaremba, A. Szlam, Y. LeCun, Spectral networks and locally connected networks on graphs, in International Conference on Learning Representations, 2014.
    [29] M. Defferrard, X. Bresson, P. Vandergheynst, Convolutional neural networks on graphs with fast localized spectral filtering, Adv. Neural Inf. Process. Syst., 29 (2016).
    [30] T. Kipf, M. Welling, Semi-supervised classification with graph convolutional networks, International Conference on Learning Representations, 2017.
    [31] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, Y. Bengio, Graph attention networks, in International Conference on Learning Representations, 2018.
    [32] K. Xu, W. Hu, J. Leskovec, S. Jegelka, How powerful are graph neural networks?, preprint, arXiv: 1810.00826.
    [33] Y. Lu, Y. Chen, D. Zhao, J. Chen, Graph-FCN for image semantic segmentation, in International Symposium on Neural Networks, 2019.
    [34] L. Zhang, X. Li, A. Arnab, K. Yang, Y. Tong, P. Torr, et al., Dual graph convolutional network for semantic segmentation, in British Machine Vision Conference, 2019.
    [35] Y. Jiang, Q. Ding, Y. G. Wang, P. Liò, X.Zhang, Vision graph u-net: Geometric learning enhanced encoder for medical image segmentation and restoration, Inverse Prob. Imaging, 2023 (2023). https://doi.org/10.3934/ipi.2023049
    [36] Z. Tian, L. Liu, Z. Zhang, B. Fei, Superpixel-based segmentation for 3d prostate mr images, IEEE Trans. Med. Imaging, 35 (2015), 791–801. https://doi.org/10.1109/tmi.2015.2496296 doi: 10.1109/tmi.2015.2496296
    [37] F. Monti, D. Boscaini, J. Masci, E. Rodola, J. Svoboda, M. Bronstein, Geometric deep learning on graphs and manifolds using mixture model cnns, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2017), 5115–5124. https://doi.org/10.1109/cvpr.2017.576
    [38] R. Gadde, V. Jampani, M. Kiefel, D. Kappler, P. Gehler, Superpixel convolutional networks using bilateral inceptions, in Proceedings of the European Conference on Computer Vision, (2016). https://doi.org/10.1007/978-3-319-46448-0_36
    [39] P. Avelar, A. Tavares, T. Silveira, C. Jung, L. Lamb, Superpixel image classification with graph attention networks, in SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), (2020), 203–209. https://doi.org/10.1109/sibgrapi51738.2020.00035
    [40] W. Zhao, L. Jiao, W. Ma, J. Zhao, J. Zhao, H. Liu, et al., Superpixel-based multiple local cnn for panchromatic and multispectral image classification, IEEE Trans. Geosci. Remote Sens., 55 (2017), 4141–4156. https://doi.org/10.1109/tgrs.2017.2689018 doi: 10.1109/tgrs.2017.2689018
    [41] B. Cui, X. Xie, X. Ma, G. Ren, Y. Ma, Superpixel-based extended random walker for hyperspectral image classification, IEEE Trans. Geosci. Remote Sens., 56 (2018), 3233–3243. https://doi.org/10.1109/tgrs.2018.2796069 doi: 10.1109/tgrs.2018.2796069
    [42] S. Zhang, S. Li, W. Fu, L. Fang, Multiscale superpixel-based sparse representation for hyperspectral image classification, Remote Sens., 9 (2017), 139. https://doi.org/10.3390/rs9020139 doi: 10.3390/rs9020139
    [43] Q. Liu, L. Xiao, J. Yang, Z. Wei, Cnn-enhanced graph convolutional network with pixel-and superpixel-level feature fusion for hyperspectral image classification, IEEE Trans. Geosci. Remote Sens., 59 (2020), 8657–8671. https://doi.org/10.1109/tgrs.2020.3037361 doi: 10.1109/tgrs.2020.3037361
    [44] P. Felzenszwalb, D. Huttenlocher, Efficient graph-based image segmentation, Int. J. Comput. Vision, 59 (2010), 167–181. https://doi.org/10.1109/icip.2010.5653963 doi: 10.1109/icip.2010.5653963
    [45] X. Ren, J. Malik, Learning a classification model for segmentation, Proceedings of the IEEE International Conference on Computer Vision, 2 (2003), 10–10. https://doi.org/10.1109/iccv.2003.1238308 doi: 10.1109/iccv.2003.1238308
    [46] M. Liu, O. Tuzel, S. Ramalingam, R. Chellappa, Entropy rate superpixel segmentation, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2011), 2097–2104. https://doi.org/10.1109/cvpr.2011.5995323
    [47] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, S. Süsstrunk, SLIC superpixels compared to state-of-the-art superpixel methods, IEEE Trans. Pattern Anal. Mach. Intell., 34 (2012), 2274–2282. https://doi.org/10.1109/tpami.2012.120
    [48] Z. Li, J. Chen, Superpixel segmentation using linear spectral clustering, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2015), 1356–1363. https://doi.org/10.1109/cvpr.2015.7298741
    [49] Y. Liu, C. Yu, M. Yu, Y. He, Manifold SLIC: A fast method to compute content-sensitive superpixels, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2016), 651–659. https://doi.org/10.1109/cvpr.2016.77
    [50] R. Achanta, S. Süsstrunk, Superpixels and polygons using simple non-iterative clustering, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2017), 4651–4660. https://doi.org/10.1109/cvpr.2017.520
    [51] W. Tu, M. Liu, V. Jampani, D. Sun, S. Chien, M. Yang, et al., Learning superpixels with segmentation-aware affinity loss, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2018), 568–576. https://doi.org/10.1109/cvpr.2018.00066
    [52] V. Jampani, D. Sun, M. Liu, M. Yang, J. Kautz, Superpixel sampling networks, in Proceedings of the European Conference on Computer Vision, (2018), 352–368. https://doi.org/10.1007/978-3-030-01234-2_22
    [53] F. Yang, Q. Sun, H. Jin, Z. Zhou, Superpixel segmentation with fully convolutional networks, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2020), 13964–13973. https://doi.org/10.1109/cvpr42600.2020.01398
    [54] T. Suzuki, Superpixel segmentation via convolutional neural networks with regularized information maximization, in IEEE International Conference on Acoustics, Speech and Signal Processing, (2020), 2573–2577. https://doi.org/10.1109/icassp40776.2020.9054140
    [55] L. Zhu, Q. She, B. Zhang, Y. Lu, Z. Lu, D. Li, J. Hu, Learning the superpixel in a non-iterative and lifelong manner, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2021), 1225–1234. https://doi.org/10.1109/cvpr46437.2021.00128
    [56] C. Saueressig, A. Berkley, R. Munbodh, R. Singh, A joint graph and image convolution network for automatic brain tumor segmentation, in International Conference on Medical Image Computing and Computer-Assisted Intervention Workshop, (2021), 356–365. https://doi.org/10.1007/978-3-031-08999-2_30
    [57] V. Kulikov, V. Lempitsky, Instance segmentation of biological images using harmonic embeddings, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2020), 3843–3851. https://doi.org/10.1109/cvpr42600.2020.00390
    [58] J. Kim, T. Kim, S. Kim, C. Yoo, Edge-labeling graph neural network for few-shot learning, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2019), 11–20. https://doi.org/10.1109/cvpr.2019.00010
    [59] T. Chen, S. Kornblith, M. Norouzi, G. Hinton, A simple framework for contrastive learning of visual representations, in International Conference on Machine Learning, (2020), 1597–1607.
    [60] X. Chen, H. Fan, R. Girshick, K. He, Improved baselines with momentum contrastive learning, preprint, arXiv: 2003.04297.
    [61] K. He, H. Fan, Y. Wu, S. Xie, R. Girshick, Momentum contrast for unsupervised visual representation learning, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2020), 9729–9738. https://doi.org/10.1109/cvpr42600.2020.00975
    [62] M. Caron, I. Misra, J. Mairal, P. Goyal, P. Bojanowski, A. Joulin, Unsupervised learning of visual features by contrasting cluster assignments, Adv. Neural Inf. Process. Syst., 33 (2020), 9912–9924.
    [63] W. Wang, T. Zhou, F. Yu, J. Dai, E. Konukoglu, L. Van Gool, Exploring cross-image pixel contrast for semantic segmentation, in Proceedings of the IEEE International Conference on Computer Vision, (2021), 7303–7313. https://doi.org/10.1109/iccv48922.2021.00721
    [64] J. Gilmer, S. Schoenholz, P. Riley, O. Vinyals, G. Dahl, Neural message passing for quantum chemistry, in International Conference on Machine Learning, 2017.
    [65] B. Weisfeiler, A. Leman, A reduction of a graph to a canonical form and an algebra arising during this reduction, Nauchno Tech. Inf., 2 (1968), 12–16.
    [66] R. Achanta, S. Süsstrunk, Superpixels and polygons using simple non-iterative clustering, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. https://doi.org/10.1109/cvpr.2017.520
    [67] F. Milletari, N. Navab, S. Ahmadi, V-net: Fully convolutional neural networks for volumetric medical image segmentation, in International Conference on 3D Vision, (2016), 565–571. https://doi.org/10.1109/3dv.2016.79
    [68] D. Kingma, J. Ba, Adam: A method for stochastic optimization, in International Conference on Learning Representations, 2015.
    [69] X. Zheng, Y. Wang, G. Wang, Z. Chen, A novel algorithm based on visual saliency attention for localization and segmentation in rapidly-stained leukocyte images, Micron, 56 (2014), 17–28. https://doi.org/10.1016/j.micron.2013.09.006
    [70] L. McInnes, J. Healy, J. Melville, UMAP: Uniform manifold approximation and projection for dimension reduction, preprint, arXiv: 1802.03426.
    [71] A. Acevedo, A. Merino, S. Alférez, A. Molina, L. Boldú, J. Rodellar, A dataset of microscopic peripheral blood cell images for development of automatic recognition systems, Data Brief, 30 (2020), 105474. https://doi.org/10.1016/j.dib.2020.105474
    [72] P. Yampri, C. Pintavirooj, S. Daochai, S. Teartulakarn, White blood cell classification based on the combination of eigen cell and parametric feature detection, in IEEE Conference on Industrial Electronics and Applications, (2006), 1–4. https://doi.org/10.1109/iciea.2006.257341
    [73] I. Livieris, E. Pintelas, A. Kanavos, P. Pintelas, Identification of blood cell subtypes from images using an improved SSL algorithm, Biomed. J. Sci. Tech. Res., 9 (2018), 6923–6929. https://doi.org/10.26717/bjstr.2018.09.001755
    [74] R. Banerjee, A. Ghose, A light-weight deep residual network for classification of abnormal heart rhythms on tiny devices, in Joint European Conference on Machine Learning and Knowledge Discovery in Databases, (2022), 317–331. https://doi.org/10.1007/978-3-031-23633-4_22
    [75] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2016), 770–778. https://doi.org/10.1109/cvpr.2016.90
© 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).