Research article

Age estimation algorithm based on deep learning and its application in fall detection

  • With the continuous development of society, age estimation based on deep learning has gradually become a key link in human-computer interaction and is widely combined with applications in other fields. In this paper, human fall behavior is divided into gradients according to the estimated age of the human body so that key populations can be detected with priority. A staged single-aggregation backbone network, VoVNetv4, is proposed for feature extraction. In addition, a regional single-aggregation (ROSA) module is constructed to encapsulate the feature modules regionally, and an adaptive stage module is used for feature smoothing. The CORAL framework is used as the classifier, dividing the task into binary subtasks so that consistent predictions are made for each task. A gradient two-node fall detection framework combined with age estimation is also designed, in which detection is divided into a primary node and a secondary node. In the primary node, the age estimation algorithm based on VoVNetv4 classifies the population into different age groups, and a face tracking algorithm is constructed by combining the human key-point matrices produced by OpenPose with the central coordinates of the human face. In the secondary node, the age-gradient information is used to detect human falls with the AT-MLP model. The experimental results show that, compared with ResNet-34, the MAE of the proposed method decreased by 0.41; compared with curriculum learning and the CORAL-CNN method, the MAE decreased by 0.17. In terms of RMSE, the proposed method was also significantly lower than the other methods, with a largest drop of 0.51.

    Citation: Jiayi Yu, Ye Tao, Huan Zhang, Zhibiao Wang, Wenhua Cui, Tianwei Shi. Age estimation algorithm based on deep learning and its application in fall detection[J]. Electronic Research Archive, 2023, 31(8): 4907-4924. doi: 10.3934/era.2023251

    Related Papers:

    [1] Manal Alqhtani, Khaled M. Saad . Numerical solutions of space-fractional diffusion equations via the exponential decay kernel. AIMS Mathematics, 2022, 7(4): 6535-6549. doi: 10.3934/math.2022364
    [2] Shazia Sadiq, Mujeeb ur Rehman . Solution of fractional boundary value problems by ψ-shifted operational matrices. AIMS Mathematics, 2022, 7(4): 6669-6693. doi: 10.3934/math.2022372
    [3] Waleed Mohamed Abd-Elhameed, Youssri Hassan Youssri . Spectral tau solution of the linearized time-fractional KdV-Type equations. AIMS Mathematics, 2022, 7(8): 15138-15158. doi: 10.3934/math.2022830
    [4] Mariam Al-Mazmumy, Maryam Ahmed Alyami, Mona Alsulami, Asrar Saleh Alsulami, Saleh S. Redhwan . An Adomian decomposition method with some orthogonal polynomials to solve nonhomogeneous fractional differential equations (FDEs). AIMS Mathematics, 2024, 9(11): 30548-30571. doi: 10.3934/math.20241475
    [5] Sunyoung Bu . A collocation methods based on the quadratic quadrature technique for fractional differential equations. AIMS Mathematics, 2022, 7(1): 804-820. doi: 10.3934/math.2022048
    [6] Zahra Pirouzeh, Mohammad Hadi Noori Skandari, Kamele Nassiri Pirbazari, Stanford Shateyi . A pseudo-spectral approach for optimal control problems of variable-order fractional integro-differential equations. AIMS Mathematics, 2024, 9(9): 23692-23710. doi: 10.3934/math.20241151
    [7] Waleed Mohamed Abd-Elhameed, Omar Mazen Alqubori, Ahmed Gamal Atta . A collocation procedure for the numerical treatment of FitzHugh–Nagumo equation using a kind of Chebyshev polynomials. AIMS Mathematics, 2025, 10(1): 1201-1223. doi: 10.3934/math.2025057
    [8] Khalid K. Ali, Mohamed A. Abd El Salam, Mohamed S. Mohamed . Chebyshev fifth-kind series approximation for generalized space fractional partial differential equations. AIMS Mathematics, 2022, 7(5): 7759-7780. doi: 10.3934/math.2022436
    [9] Chang Phang, Abdulnasir Isah, Yoke Teng Toh . Poly-Genocchi polynomials and its applications. AIMS Mathematics, 2021, 6(8): 8221-8238. doi: 10.3934/math.2021476
    [10] K. Ali Khalid, Aiman Mukheimer, A. Younis Jihad, Mohamed A. Abd El Salam, Hassen Aydi . Spectral collocation approach with shifted Chebyshev sixth-kind series approximation for generalized space fractional partial differential equations. AIMS Mathematics, 2022, 7(5): 8622-8644. doi: 10.3934/math.2022482



    A wide range of fields rely on Chebyshev polynomials (CPs). Four kinds of CPs are special cases of the Jacobi polynomials (JPs), and they have been employed in many applications; see [1,2,3,4]. Other kinds can be regarded as special cases of the generalized ultraspherical polynomials; see [5,6]. Some contributions introduced and utilized other specific kinds of generalized ultraspherical polynomials. In the sequence of papers [7,8,9,10], the authors utilized CPs of the fifth and sixth kinds to treat different types of differential equations (DEs). Furthermore, eighth-kind CPs were utilized in [11,12] to solve other types of DEs.

    Several phenomena that arise in different applied sciences can be better understood by delving into fractional calculus, which studies the integration and derivatives for non-integer orders. When describing important phenomena, fractional differential equations (FDEs) are vital. There are many examples of FDEs applications; see, for instance, [13,14,15]. Because it is usually not feasible to find analytical solutions for these equations, numerical methods are relied upon. Several methods were utilized to tackle various types of FDEs. Here are some techniques used to treat several FDEs: the Adomian decomposition method [16], a finite difference scheme [17], generalized finite difference method [18], Gauss collocation method [19], the inverse Laplace transform [20], the residual power series method [21], multi-step methods [22], Haar wavelet in [23], matrix methods in [24,25,26], collocation methods in [27,28,29,30], Galerkin methods in [31,32,33], and neural networks method in [34].

    Among the essential FDEs are the Rayleigh-Stokes equations. The fractional Rayleigh-Stokes equation is a mathematical model for the motion of fluids with fractional derivatives. This equation is used in many areas of study, such as non-Newtonian fluids, viscoelastic fluids, and fluid dynamics. Many contributions were devoted to investigating the types of Rayleigh-Stokes from a theoretical and numerical perspective. Theoretically, one can consult [35,36,37]. Several numerical approaches were followed to solve these equations. In [38], the authors used a finite difference method for the fractional Rayleigh-Stokes equation (FRSE). In [39], a computational method for two-dimensional FRSE is followed. The authors of [40] used a finite volume element algorithm to treat a nonlinear FRSE. In [41], a numerical method is applied to handle a type of Rayleigh-Stokes problem. Discrete Hahn polynomials treated variable-order two-dimensional FRSE in [42]. The authors of [43] numerically solved the FRSE.

    The significance of spectral approaches in engineering and fluid dynamics has been better understood in recent years, and this trend is being further explored in the applied sciences [44,45,46]. In these techniques, approximations to integral and differential equations are assumed by expanding a variety of polynomials, which are frequently orthogonal. The three spectral techniques used most often are the collocation, tau, and Galerkin methods. The optimal spectral approach for a given equation depends on the nature of the DE and the governing conditions that regulate it. The three spectral methods use distinct trial and test functions. In the Galerkin method, the test and trial functions are chosen so that each basis function member meets the given DE's underlying constraints; see [47,48]. The tau method is not limited to a specific set of basis functions like the Galerkin approach; this is why it solves many types of DEs; see [49]. Among the spectral methods, the collocation method is often the most convenient to apply; see, for example, [50,51].

    In his seminal papers [52,53], Shen explored a new idea to apply the Galerkin method. He selected orthogonal combinations of Legendre and first-kind CPs to solve second- and fourth-order DEs. The Galerkin approach was used to discretize the problems with their governing conditions. The authors of [54] employed generalized combinations to address even-order DEs.

    This paper's main contribution and significance is the development of a new Galerkin approach for treating the FRSE. The suggested technique has the advantage that it yields accurate approximations by picking a small number of the retained modes of the selected Galerkin basis functions.

    The current paper has the following structure. Section 2 presents some preliminaries and essential relations. Section 3 describes a Galerkin approach for treating the FRSE. A comprehensive convergence analysis is presented in Section 4. Section 5 is devoted to some illustrative examples that show the efficiency and applicability of the proposed method. Section 6 reports some conclusions.

    This section defines the fractional Caputo derivative and reviews some of its essential properties. Next, we gather significant characteristics of the second-kind CPs. This paper will use some orthogonal combinations of the second-kind CPs to solve the FRSE.

    Definition 2.1. In Caputo's sense, the fractional-order derivative of the function ξ(s) is defined as [55]

$$D^{\alpha}\xi(s)=\frac{1}{\Gamma(p-\alpha)}\int_{0}^{s}(s-t)^{p-\alpha-1}\,\xi^{(p)}(t)\,dt,\qquad \alpha>0,\ s>0,\ p-1<\alpha<p,\ p\in\mathbb{N}. \tag{2.1}$$

    For $D^{\alpha}$ with $p-1<\alpha<p$, $p\in\mathbb{N}$, the following identities are valid:

$$D^{\alpha}C=0,\qquad C \text{ is a constant}, \tag{2.2}$$
$$D^{\alpha}s^{p}=\begin{cases}0,&\text{if } p\in\mathbb{N}_{0}\ \text{and}\ p<\lceil\alpha\rceil,\\[4pt] \dfrac{p!}{\Gamma(p-\alpha+1)}\,s^{p-\alpha},&\text{if } p\in\mathbb{N}_{0}\ \text{and}\ p\geq\lceil\alpha\rceil,\end{cases} \tag{2.3}$$

    where $\mathbb{N}=\{1,2,\ldots\}$, $\mathbb{N}_{0}=\{0,1,2,\ldots\}$, and $\lceil\alpha\rceil$ denotes the ceiling function.
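    As a quick illustration of the rules above, the following Python sketch (not part of the paper) evaluates $D^{\alpha}s^{p}$ both by the closed form (2.3) and by direct quadrature of Definition (2.1); the two values should agree to several digits. The function names and test values are illustrative assumptions.

```python
import math
from scipy.integrate import quad

def caputo_monomial(p_exp, alpha, s):
    """Closed form (2.3): D^alpha s^p, with p a nonnegative integer."""
    if p_exp < math.ceil(alpha):
        return 0.0
    return math.factorial(p_exp) / math.gamma(p_exp - alpha + 1) * s ** (p_exp - alpha)

def caputo_quadrature(xi_deriv_p, alpha, s, p):
    """Direct evaluation of (2.1) by quadrature; xi_deriv_p is the p-th derivative of xi."""
    integrand = lambda t: (s - t) ** (p - alpha - 1) * xi_deriv_p(t)
    val, _ = quad(integrand, 0.0, s)
    return val / math.gamma(p - alpha)

alpha, s = 0.6, 0.8                                          # 0 < alpha < 1, so p = 1 in (2.1)
print(caputo_monomial(3, alpha, s))                          # D^0.6 of s^3 via (2.3)
print(caputo_quadrature(lambda t: 3 * t ** 2, alpha, s, 1))  # same value via (2.1)
```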

    The shifted second-kind CPs $U_{j}(t)$ are orthogonal with respect to the weight function $\omega(t)=\sqrt{t(\tau-t)}$ on the interval $[0,\tau]$ and are defined as [56,57]

$$U_{j}(t)=\sum_{r=0}^{j}\lambda_{r,j}\,t^{r},\qquad j\geq 0, \tag{2.4}$$

    where

$$\lambda_{r,j}=\frac{2^{2r}(-1)^{j+r}(j+r+1)!}{\tau^{r}\,(2r+1)!\,(j-r)!}, \tag{2.5}$$

    with the following orthogonality relation [56]:

$$\int_{0}^{\tau}\omega(t)\,U_{m}(t)\,U_{n}(t)\,dt=q_{m,n}, \tag{2.6}$$

    where

$$q_{m,n}=\frac{\pi\tau^{2}}{8}\begin{cases}1,&\text{if } m=n,\\ 0,&\text{if } m\neq n.\end{cases} \tag{2.7}$$

    $\{U_{m}(t)\}_{m\geq 0}$ can be generated by the recurrence formula

$$U_{m}(t)=2\left(\frac{2t}{\tau}-1\right)U_{m-1}(t)-U_{m-2}(t),\qquad U_{0}(t)=1,\quad U_{1}(t)=\frac{4t}{\tau}-2,\qquad m\geq 2. \tag{2.8}$$
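    The next short Python sketch (illustrative only, not the authors' code) generates these polynomials from the recurrence (2.8) and verifies the orthogonality relation (2.6)-(2.7) by numerical quadrature.

```python
import numpy as np
from numpy.polynomial import polynomial as P
from scipy.integrate import quad

tau = 2.0                                        # an assumed value of tau
two_y = np.array([-2.0, 4.0 / tau])              # coefficients of 2(2t/tau - 1)

def shifted_U(m):
    """Ascending-power coefficients of U_m(t) on [0, tau] via the recurrence (2.8)."""
    U_prev, U_curr = np.array([1.0]), two_y.copy()   # U_0(t) = 1, U_1(t) = 4t/tau - 2
    if m == 0:
        return U_prev
    for _ in range(m - 1):
        U_prev, U_curr = U_curr, P.polysub(P.polymul(two_y, U_curr), U_prev)
    return U_curr

def inner(m, n):
    """Weighted inner product in (2.6) with omega(t) = sqrt(t(tau - t))."""
    f = lambda t: np.sqrt(t * (tau - t)) * P.polyval(t, shifted_U(m)) * P.polyval(t, shifted_U(n))
    return quad(f, 0.0, tau)[0]

print(inner(3, 3), np.pi * tau ** 2 / 8)   # equal, by (2.7)
print(inner(3, 5))                         # ~ 0 (orthogonality)
```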

    The following theorem that presents the derivatives of Um(t) is helpful in what follows.

    Theorem 2.1. [56] For all $j\geq n$, the following formula is valid:

$$D^{n}U_{j}(t)=\left(\frac{4}{\tau}\right)^{n}\sum_{\substack{p=0\\ (p+j+n)\,\mathrm{even}}}^{j-n}\frac{(p+1)\,(n)_{\frac{1}{2}(j-n-p)}}{\left(\tfrac{1}{2}(j-n-p)\right)!\,\left(\tfrac{1}{2}(j+n+p+2)\right)_{1-n}}\,U_{p}(t). \tag{2.9}$$

    The following particular formulas of (2.9) give expressions for the first- and second-order derivatives.

    Corollary 2.1. The following derivative formulas are valid:

$$DU_{j}(t)=\frac{4}{\tau}\sum_{\substack{p=0\\ (p+j)\,\mathrm{odd}}}^{j-1}(p+1)\,U_{p}(t),\qquad j\geq 1, \tag{2.10}$$
$$D^{2}U_{j}(t)=\frac{4}{\tau^{2}}\sum_{\substack{p=0\\ (p+j)\,\mathrm{even}}}^{j-2}(p+1)(j-p)(j+p+2)\,U_{p}(t),\qquad j\geq 2. \tag{2.11}$$

    Proof. Special cases of Theorem 2.1.
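    Formula (2.10) is easy to spot-check numerically; the self-contained sketch below (again illustrative) compares the exact polynomial derivative of $U_{j}$ with the right-hand-side expansion, building the polynomials from the power form (2.4)-(2.5).

```python
import math
import numpy as np
from numpy.polynomial import polynomial as P

tau, j = 2.0, 5                                   # assumed test values

def U_coeffs(n):
    """Ascending-power coefficients of U_n(t) on [0, tau] from (2.4)-(2.5)."""
    return np.array([2 ** (2 * r) * (-1) ** (n + r) * math.factorial(n + r + 1)
                     / (tau ** r * math.factorial(2 * r + 1) * math.factorial(n - r))
                     for r in range(n + 1)])

lhs = P.polyder(U_coeffs(j))                      # d/dt U_j(t), exact
rhs = np.zeros(j)
for p in range(j):                                # p = 0, ..., j-1 with (p + j) odd
    if (p + j) % 2 == 1:
        c = U_coeffs(p)
        rhs[:c.size] += (4.0 / tau) * (p + 1) * c
print(np.allclose(lhs, rhs))                      # expected: True
```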

    This section is devoted to analyzing a Galerkin approach to solve the following FRSE [38,58]:

$$v_{t}(x,t)-D^{\alpha}_{t}\left[a\,v_{xx}(x,t)\right]-b\,v_{xx}(x,t)=S(x,t),\qquad 0<\alpha<1, \tag{3.1}$$

    governed by the following constraints:

$$v(x,0)=v_{0}(x),\qquad 0<x<\ell, \tag{3.2}$$
$$v(0,t)=v_{1}(t),\qquad v(\ell,t)=v_{2}(t),\qquad 0<t\leq\tau, \tag{3.3}$$

    where a and b are two positive constants and S(x,t) is a known smooth function.

    Remark 3.1. The well-posedness and regularity of the fractional Rayleigh-Stokes problem are discussed in detail in [36].

    We choose the trial functions to be

$$\varphi_{i}(x)=x(\ell-x)\,U_{i}(x), \tag{3.4}$$

    where $U_{i}(x)$ denotes the shifted second-kind CP on $[0,\ell]$ (that is, (2.4) with $\tau$ replaced by $\ell$).

    Due to (2.6), it can be seen that $\{\varphi_{i}(x)\}_{i\geq 0}$ satisfies the following orthogonality relation:

$$\int_{0}^{\ell}\hat{\omega}(x)\,\varphi_{i}(x)\,\varphi_{j}(x)\,dx=a_{i,j}, \tag{3.5}$$

    where

$$a_{i,j}=\frac{\pi\ell^{2}}{8}\begin{cases}1,&\text{if } i=j,\\ 0,&\text{if } i\neq j,\end{cases} \tag{3.6}$$

    and $\hat{\omega}(x)=\dfrac{1}{x^{\frac{3}{2}}(\ell-x)^{\frac{3}{2}}}$.
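    The orthogonality (3.5)-(3.6), including the $\ell^{2}$ scaling of the diagonal entries, can be confirmed by quadrature as in the following sketch (illustrative Python; $\ell=1.5$ is an assumed test value).

```python
import math
from scipy.integrate import quad

ell = 1.5                                          # assumed domain length

def U(n, x):
    """Shifted second-kind Chebyshev polynomial on [0, ell], i.e., (2.4) with tau -> ell."""
    return sum(2 ** (2 * r) * (-1) ** (n + r) * math.factorial(n + r + 1)
               / (ell ** r * math.factorial(2 * r + 1) * math.factorial(n - r)) * x ** r
               for r in range(n + 1))

def a_entry(i, j):
    phi = lambda n, x: x * (ell - x) * U(n, x)                 # trial functions (3.4)
    w_hat = lambda x: (x * (ell - x)) ** (-1.5)                # weight in (3.5)
    return quad(lambda x: w_hat(x) * phi(i, x) * phi(j, x), 0.0, ell)[0]

print(a_entry(2, 2), math.pi * ell ** 2 / 8)   # diagonal entry, cf. (3.6)
print(a_entry(2, 3))                           # ~ 0 (off-diagonal)
```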

    Theorem 3.1. The second derivative of $\varphi_{i}(x)$ can be expressed explicitly in terms of $U_{j}(x)$ as

$$\frac{d^{2}\varphi_{i}(x)}{dx^{2}}=\sum_{j=0}^{i}\mu_{j,i}\,U_{j}(x), \tag{3.7}$$

    where

$$\mu_{j,i}=-2\begin{cases}j+1,&\text{if } i>j \text{ and } (i+j) \text{ even},\\[2pt] \tfrac{1}{2}(i+1)(i+2),&\text{if } i=j,\\[2pt] 0,&\text{otherwise}.\end{cases} \tag{3.8}$$

    Proof. Based on the basis functions in (3.4), we can write

$$\frac{d^{2}\varphi_{i}(x)}{dx^{2}}=-2U_{i}(x)+2(\ell-2x)\frac{dU_{i}(x)}{dx}+(\ell x-x^{2})\frac{d^{2}U_{i}(x)}{dx^{2}}. \tag{3.9}$$

    Using Corollary 2.1, Eq (3.9) may be rewritten as

$$\begin{aligned}\frac{d^{2}\varphi_{i}(x)}{dx^{2}}=\;&-2U_{i}(x)+8\sum_{\substack{p=0\\ (p+i)\,\mathrm{odd}}}^{i-1}(p+1)\,U_{p}(x)-\frac{16}{\ell}\sum_{\substack{p=0\\ (p+i)\,\mathrm{odd}}}^{i-1}(p+1)\,x\,U_{p}(x)\\&+\frac{4}{\ell}\sum_{\substack{p=0\\ (p+i)\,\mathrm{even}}}^{i-2}(p+1)(i-p)(i+p+2)\,x\,U_{p}(x)-\frac{4}{\ell^{2}}\sum_{\substack{p=0\\ (p+i)\,\mathrm{even}}}^{i-2}(p+1)(i-p)(i+p+2)\,x^{2}\,U_{p}(x).\end{aligned} \tag{3.10}$$

    With the aid of the recurrence relation (2.8), the following recurrence relation for Ui(x) holds:

$$x\,U_{i}(x)=\frac{\ell}{4}\left[U_{i+1}(x)+2U_{i}(x)+U_{i-1}(x)\right]. \tag{3.11}$$

    Moreover, the last relation enables us to write the following relation:

$$x^{2}\,U_{i}(x)=\frac{\ell^{2}}{16}\left[U_{i+2}(x)+4U_{i+1}(x)+6U_{i}(x)+4U_{i-1}(x)+U_{i-2}(x)\right]. \tag{3.12}$$

    If we insert relations (3.11) and (3.12) into relation (3.10), and perform some computations, then we get

$$\frac{d^{2}\varphi_{i}(x)}{dx^{2}}=\sum_{j=0}^{i}\mu_{j,i}\,U_{j}(x), \tag{3.13}$$

    where

$$\mu_{j,i}=-2\begin{cases}j+1,&\text{if } i>j \text{ and } (i+j) \text{ even},\\[2pt] \tfrac{1}{2}(i+1)(i+2),&\text{if } i=j,\\[2pt] 0,&\text{otherwise}.\end{cases} \tag{3.14}$$

    This completes the proof.
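    The expansion of Theorem 3.1, including the overall factor $-2$ in (3.8), can be spot-checked with a few lines of Python (a sketch; $\ell$ and $i$ are assumed test values):

```python
import math
import numpy as np
from numpy.polynomial import polynomial as P

ell, i = 1.5, 6                                   # assumed test values

def U_coeffs(n):
    """Ascending-power coefficients of U_n(x) on [0, ell]."""
    return np.array([2 ** (2 * r) * (-1) ** (n + r) * math.factorial(n + r + 1)
                     / (ell ** r * math.factorial(2 * r + 1) * math.factorial(n - r))
                     for r in range(n + 1)])

def mu(j, n):
    """Coefficients (3.8)."""
    if n == j:
        return -(n + 1) * (n + 2)
    if n > j and (n + j) % 2 == 0:
        return -2.0 * (j + 1)
    return 0.0

phi = P.polymul([0.0, ell, -1.0], U_coeffs(i))    # phi_i(x) = x(ell - x) U_i(x)
lhs = P.polyder(phi, 2)                           # its second derivative
rhs = np.zeros(i + 1)
for j in range(i + 1):
    c = U_coeffs(j)
    rhs[:c.size] += mu(j, i) * c
print(np.allclose(lhs, rhs))                      # expected: True
```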

    Consider the FRSE (3.1), governed by the conditions $v(0,t)=v(\ell,t)=0$.

    Now, consider the following spaces:

$$\mathcal{P}_{M}(\Omega)=\mathrm{span}\{\varphi_{i}(x)U_{j}(t):\ i,j=0,1,\ldots,M\},\qquad X_{M}(\Omega)=\{v(x,t)\in \mathcal{P}_{M}(\Omega):\ v(0,t)=v(\ell,t)=0\}, \tag{3.15}$$

    where $\Omega=(0,\ell)\times(0,\tau]$.

    The approximate solution $\hat{v}(x,t)\in X_{M}(\Omega)$ may be expressed as

$$\hat{v}(x,t)=\sum_{i=0}^{M}\sum_{j=0}^{M}c_{ij}\,\varphi_{i}(x)\,U_{j}(t)=\boldsymbol{\varphi}\,\mathbf{C}\,\mathbf{U}^{T}, \tag{3.16}$$

    where

$$\boldsymbol{\varphi}=[\varphi_{0}(x),\varphi_{1}(x),\ldots,\varphi_{M}(x)],$$
$$\mathbf{U}=[U_{0}(t),U_{1}(t),\ldots,U_{M}(t)],$$

    and $\mathbf{C}=(c_{ij})_{0\leq i,j\leq M}$ is the unknown matrix to be determined, whose order is $(M+1)\times(M+1)$.

    The residual R(x,t) of Eq (3.1) may be calculated to give

$$R(x,t)=\hat{v}_{t}(x,t)-D^{\alpha}_{t}\left[a\,\hat{v}_{xx}(x,t)\right]-b\,\hat{v}_{xx}(x,t)-S(x,t). \tag{3.17}$$

    The philosophy in applying the Galerkin method is to find $\hat{v}(x,t)\in X_{M}(\Omega)$ such that

$$\left(R(x,t),\,\varphi_{r}(x)\,U_{s}(t)\right)_{\bar{\omega}(x,t)}=0,\qquad 0\leq r\leq M,\quad 0\leq s\leq M-1, \tag{3.18}$$

    where $\bar{\omega}(x,t)=\hat{\omega}(x)\,\omega(t)$. The last equation may be rewritten as

$$\begin{aligned}&\sum_{i=0}^{M}\sum_{j=0}^{M}c_{ij}\left(\varphi_{i}(x),\varphi_{r}(x)\right)_{\hat{\omega}(x)}\left(\frac{dU_{j}(t)}{dt},U_{s}(t)\right)_{\omega(t)}-a\sum_{i=0}^{M}\sum_{j=0}^{M}c_{ij}\left(\frac{d^{2}\varphi_{i}(x)}{dx^{2}},\varphi_{r}(x)\right)_{\hat{\omega}(x)}\left(D^{\alpha}_{t}U_{j}(t),U_{s}(t)\right)_{\omega(t)}\\&-b\sum_{i=0}^{M}\sum_{j=0}^{M}c_{ij}\left(\frac{d^{2}\varphi_{i}(x)}{dx^{2}},\varphi_{r}(x)\right)_{\hat{\omega}(x)}\left(U_{j}(t),U_{s}(t)\right)_{\omega(t)}=\left(S(x,t),\,\varphi_{r}(x)\,U_{s}(t)\right)_{\bar{\omega}(x,t)}.\end{aligned} \tag{3.19}$$

    In matrix form, Eq (3.19) can be written as

$$\mathbf{A}^{T}\mathbf{C}\,\mathbf{B}-a\,\mathbf{H}^{T}\mathbf{C}\,\mathbf{K}-b\,\mathbf{H}^{T}\mathbf{C}\,\mathbf{Q}=\mathbf{G}, \tag{3.20}$$

    where

$$\mathbf{G}=(g_{r,s})_{(M+1)\times M},\qquad g_{r,s}=\left(S(x,t),\,\varphi_{r}(x)U_{s}(t)\right)_{\bar{\omega}(x,t)}, \tag{3.21}$$
$$\mathbf{A}=(a_{i,r})_{(M+1)\times(M+1)},\qquad a_{i,r}=\left(\varphi_{i}(x),\varphi_{r}(x)\right)_{\hat{\omega}(x)}, \tag{3.22}$$
$$\mathbf{B}=(b_{j,s})_{(M+1)\times M},\qquad b_{j,s}=\left(\frac{dU_{j}(t)}{dt},U_{s}(t)\right)_{\omega(t)}, \tag{3.23}$$
$$\mathbf{H}=(h_{i,r})_{(M+1)\times(M+1)},\qquad h_{i,r}=\left(\frac{d^{2}\varphi_{i}(x)}{dx^{2}},\varphi_{r}(x)\right)_{\hat{\omega}(x)}, \tag{3.24}$$
$$\mathbf{K}=(k_{j,s})_{(M+1)\times M},\qquad k_{j,s}=\left(D^{\alpha}_{t}U_{j}(t),U_{s}(t)\right)_{\omega(t)}, \tag{3.25}$$
$$\mathbf{Q}=(q_{j,s})_{(M+1)\times M},\qquad q_{j,s}=\left(U_{j}(t),U_{s}(t)\right)_{\omega(t)}. \tag{3.26}$$

    Moreover, (3.2) implies that

$$\sum_{i=0}^{M}\sum_{j=0}^{M}c_{ij}\,a_{i,r}\,U_{j}(0)=\left(v(x,0),\varphi_{r}(x)\right)_{\hat{\omega}(x)},\qquad 0\leq r\leq M. \tag{3.27}$$

    Now, Eq (3.20) along with (3.27) constitutes a system of algebraic equations of order $(M+1)^{2}$ that may be solved using a suitable numerical procedure.
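    In practice, (3.20) and (3.27) can be flattened into a single linear system by vectorization, using $\mathrm{vec}(\mathbf{A}^{T}\mathbf{C}\mathbf{B})=(\mathbf{B}^{T}\otimes\mathbf{A}^{T})\,\mathrm{vec}(\mathbf{C})$. The Python sketch below uses placeholder matrices only; the actual entries are those given in Theorem 3.2 below, so this is an illustration of the assembly step rather than the full method.

```python
import numpy as np

M = 4
rng = np.random.default_rng(0)
a, b = 1.0, 1.0
A, H = rng.normal(size=(M + 1, M + 1)), rng.normal(size=(M + 1, M + 1))
B, K, Q = (rng.normal(size=(M + 1, M)) for _ in range(3))
G = rng.normal(size=(M + 1, M))

# Galerkin rows from (3.20): (M+1)*M equations in vec(C) (column-major vec)
L_galerkin = np.kron(B.T, A.T) - a * np.kron(K.T, H.T) - b * np.kron(Q.T, H.T)
rhs_galerkin = G.flatten(order="F")

# Initial-condition rows from (3.27): A^T C u0 = g0, i.e. (u0^T kron A^T) vec(C) = g0
u0 = rng.normal(size=M + 1)          # placeholder for the values U_j(0)
g0 = rng.normal(size=M + 1)          # placeholder for (v(x,0), phi_r)_{w_hat}
L_init = np.kron(u0[None, :], A.T)

rows = np.vstack([L_galerkin, L_init])            # (M+1)^2 equations in total
rhs = np.concatenate([rhs_galerkin, g0])
vecC = np.linalg.solve(rows, rhs)
C = vecC.reshape((M + 1, M + 1), order="F")       # coefficient matrix of (3.16)
print(rows.shape, C.shape)                        # (25, 25) (5, 5)
```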

    Now, the derivation of the formulas for the entries of the matrices $\mathbf{A}$, $\mathbf{B}$, $\mathbf{H}$, $\mathbf{K}$ and $\mathbf{Q}$ is given in the following theorem.

    Theorem 3.2. The following definite integral formulas are valid:

$$\begin{aligned}&(\mathrm{a})\ \int_{0}^{\ell}\hat{\omega}(x)\,\varphi_{i}(x)\,\varphi_{r}(x)\,dx=a_{i,r}, &&(\mathrm{b})\ \int_{0}^{\ell}\hat{\omega}(x)\,\frac{d^{2}\varphi_{i}(x)}{dx^{2}}\,\varphi_{r}(x)\,dx=h_{i,r},\\ &(\mathrm{c})\ \int_{0}^{\tau}\omega(t)\,U_{j}(t)\,U_{s}(t)\,dt=q_{j,s}, &&(\mathrm{d})\ \int_{0}^{\tau}\omega(t)\,\frac{dU_{j}(t)}{dt}\,U_{s}(t)\,dt=b_{j,s},\\ &(\mathrm{e})\ \int_{0}^{\tau}\omega(t)\left[D^{\alpha}_{t}U_{j}(t)\right]U_{s}(t)\,dt=k_{j,s},\end{aligned} \tag{3.28}$$

    where $q_{j,s}$ and $a_{i,r}$ are given respectively in Eqs (2.6) and (3.6). Also, we have

$$h_{i,r}=\sum_{j=0}^{i}\mu_{j,i}\,\gamma_{j,r}, \tag{3.29}$$
$$b_{j,s}=\frac{\pi\tau}{2}\sum_{\substack{p=0\\ (p+j)\,\mathrm{odd}}}^{j-1}(p+1)\,\delta_{p,s}, \tag{3.30}$$
$$\gamma_{j,r}=\begin{cases}\pi(r+1),&\text{if } j\geq r \text{ and } (r+j) \text{ even},\\ \pi(j+1),&\text{if } j<r \text{ and } (r+j) \text{ even},\\ 0,&\text{otherwise},\end{cases} \tag{3.31}$$
$$\delta_{p,s}=\begin{cases}1,&\text{if } p=s,\\ 0,&\text{if } p\neq s,\end{cases} \tag{3.32}$$

    and

$$k_{j,s}=\sum_{k=1}^{j}\frac{\pi\,4^{k-1}(s+1)\,k!\,\tau^{2-\alpha}(-1)^{j+k+s}(j+k+1)!\,\Gamma\!\left(k-\alpha+\tfrac{3}{2}\right)}{(2k+1)!\,(j-k)!\,\Gamma(k-\alpha+1)}\;{}_{3}\tilde{F}_{2}\!\left(-s,\,s+2,\,k-\alpha+\tfrac{3}{2};\ \tfrac{3}{2},\,k-\alpha+3;\ 1\right), \tag{3.33}$$

    where ${}_{3}\tilde{F}_{2}$ is the regularized hypergeometric function [59].

    Proof. To find the elements $h_{i,r}$: Using Theorem 3.1, one has

$$h_{i,r}=\int_{0}^{\ell}\hat{\omega}(x)\,\frac{d^{2}\varphi_{i}(x)}{dx^{2}}\,\varphi_{r}(x)\,dx=\sum_{j=0}^{i}\mu_{j,i}\int_{0}^{\ell}\hat{\omega}(x)\,U_{j}(x)\,\varphi_{r}(x)\,dx. \tag{3.34}$$

    Now, $\int_{0}^{\ell}\hat{\omega}(x)\,U_{j}(x)\,\varphi_{r}(x)\,dx$ can be calculated to give the following result:

$$\int_{0}^{\ell}\hat{\omega}(x)\,U_{j}(x)\,\varphi_{r}(x)\,dx=\gamma_{j,r}, \tag{3.35}$$

    and therefore, we get the following desired result:

$$h_{i,r}=\sum_{j=0}^{i}\mu_{j,i}\,\gamma_{j,r}. \tag{3.36}$$

    To find the elements $b_{j,s}$: Formula (2.10) along with the orthogonality relation (2.6) helps us to write

$$b_{j,s}=\int_{0}^{\tau}\omega(t)\,\frac{dU_{j}(t)}{dt}\,U_{s}(t)\,dt=\frac{\pi\tau}{2}\sum_{\substack{p=0\\ (p+j)\,\mathrm{odd}}}^{j-1}(p+1)\,\delta_{p,s}. \tag{3.37}$$

    To find $k_{j,s}$: Using property (2.3) together with (2.4), one can write

$$\begin{aligned}k_{j,s}&=\int_{0}^{\tau}\omega(t)\left[D^{\alpha}_{t}U_{j}(t)\right]U_{s}(t)\,dt=\sum_{k=1}^{j}\frac{2^{2k}\,k!\,(-1)^{j+k}(j+k+1)!}{(2k+1)!\,\tau^{k}\,(j-k)!\,\Gamma(k-\alpha+1)}\int_{0}^{\tau}U_{s}(t)\,t^{k-\alpha}\,\omega(t)\,dt\\&=\sum_{k=1}^{j}\frac{2^{2k}\,k!\,(-1)^{j+k}(j+k+1)!}{(2k+1)!\,(j-k)!\,\Gamma(k-\alpha+1)}\sum_{n=0}^{s}\frac{\sqrt{\pi}\,2^{2n-1}\,\tau^{2-\alpha}(-1)^{n+s}\,\Gamma(n+s+2)\,\Gamma\!\left(k+n-\alpha+\tfrac{3}{2}\right)}{(2n+1)!\,(s-n)!\,\Gamma(k+n-\alpha+3)}.\end{aligned} \tag{3.38}$$

    If we note the following identity:

$$\sum_{n=0}^{s}\frac{\sqrt{\pi}\,2^{2n-1}\,\tau^{2-\alpha}(-1)^{n+s}(n+s+1)!\,\Gamma\!\left(k+n-\alpha+\tfrac{3}{2}\right)}{(2n+1)!\,(s-n)!\,\Gamma(k+n-\alpha+3)}=\frac{\pi}{4}\,(-1)^{s}(s+1)\,\tau^{2-\alpha}\,\Gamma\!\left(k-\alpha+\tfrac{3}{2}\right)\,{}_{3}\tilde{F}_{2}\!\left(-s,\,s+2,\,k-\alpha+\tfrac{3}{2};\ \tfrac{3}{2},\,k-\alpha+3;\ 1\right), \tag{3.39}$$

    then, we get

$$k_{j,s}=\sum_{k=1}^{j}\frac{\pi\,4^{k-1}(s+1)\,k!\,\tau^{2-\alpha}(-1)^{j+k+s}\,\Gamma(j+k+2)\,\Gamma\!\left(k-\alpha+\tfrac{3}{2}\right)}{\Gamma(2k+2)\,(j-k)!\,\Gamma(k-\alpha+1)}\;{}_{3}\tilde{F}_{2}\!\left(-s,\,s+2,\,k-\alpha+\tfrac{3}{2};\ \tfrac{3}{2},\,k-\alpha+3;\ 1\right). \tag{3.40}$$

    Theorem 3.2 is now proved.
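    As an independent check of (3.33) (and hence of entry (e) in (3.28)), the following mpmath sketch compares the closed form with direct numerical quadrature for one pair $(j,s)$; the regularized function ${}_{3}\tilde{F}_{2}$ is obtained by dividing mpmath's hyper function by the gamma functions of the lower parameters. All parameter values are illustrative.

```python
import mpmath as mp

mp.mp.dps = 30
tau, alpha, j, s = mp.mpf(1), mp.mpf('0.6'), 4, 2      # assumed test values

def lam(r, n):                                         # coefficients (2.5)
    return (4 ** r * (-1) ** (n + r) * mp.factorial(n + r + 1)
            / (tau ** r * mp.factorial(2 * r + 1) * mp.factorial(n - r)))

U = lambda n, t: sum(lam(r, n) * t ** r for r in range(n + 1))
DaU = lambda n, t: sum(lam(r, n) * mp.factorial(r) / mp.gamma(r - alpha + 1) * t ** (r - alpha)
                       for r in range(1, n + 1))       # Caputo derivative via (2.3)

# Entry (e) of (3.28) by quadrature
direct = mp.quad(lambda t: mp.sqrt(t * (tau - t)) * DaU(j, t) * U(s, t), [0, tau])

# Closed form (3.33); the regularized 3F2 equals hyper(...) / (Gamma(3/2) Gamma(k - alpha + 3))
closed = sum(mp.pi * 4 ** (k - 1) * (s + 1) * mp.factorial(k) * tau ** (2 - alpha)
             * (-1) ** (j + k + s) * mp.factorial(j + k + 1) * mp.gamma(k - alpha + mp.mpf(3) / 2)
             / (mp.factorial(2 * k + 1) * mp.factorial(j - k) * mp.gamma(k - alpha + 1))
             * mp.hyper([-s, s + 2, k - alpha + mp.mpf(3) / 2], [mp.mpf(3) / 2, k - alpha + 3], 1)
             / (mp.gamma(mp.mpf(3) / 2) * mp.gamma(k - alpha + 3))
             for k in range(1, j + 1))

print(direct)
print(closed)   # should agree with the quadrature value
```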

    Remark 3.2. The following algorithm shows our proposed Galerkin technique, which outlines the necessary steps to get the approximate solutions.

    Algorithm 1 Coding algorithm for the proposed technique
    Input $a$, $b$, $\ell$, $\tau$, $\alpha$, $v_{0}(x)$, and $S(x,t)$.
    Step 1. Assume an approximate solution $\hat{v}(x,t)$ as in (3.16).
    Step 2. Apply the Galerkin method to obtain the system in (3.20) and (3.27).
    Step 3. Use Theorem 3.2 to compute the elements of the matrices $\mathbf{A}$, $\mathbf{B}$, $\mathbf{H}$, $\mathbf{K}$ and $\mathbf{Q}$.
    Step 4. Use the NDSolve command to solve the system in (3.20) and (3.27) to get $c_{ij}$.
    Output $\hat{v}(x,t)$.

    Remark 3.3. Based on the substitution

$$v(x,t):=y(x,t)+\left(1-\frac{x}{\ell}\right)v(0,t)+\frac{x}{\ell}\,v(\ell,t), \tag{3.41}$$

    the FRSE (3.1) with non-homogeneous boundary conditions is converted into one with the homogeneous conditions $y(0,t)=y(\ell,t)=0$.
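    Since the correction added in (3.41) is linear in $x$, its second $x$-derivative vanishes, so only its time derivative enters the modified source term. The tiny sympy sketch below (illustrative) confirms that the substitution enforces homogeneous boundary values for $y$.

```python
import sympy as sp

x, t, ell = sp.symbols('x t ell', positive=True)
v1, v2, y = sp.Function('v1'), sp.Function('v2'), sp.Function('y')

v = y(x, t) + (1 - x / ell) * v1(t) + (x / ell) * v2(t)   # substitution (3.41)
print(sp.simplify(v.subs(x, 0) - v1(t)))     # y(0, t): so v(0,t) = v1(t) iff y(0,t) = 0
print(sp.simplify(v.subs(x, ell) - v2(t)))   # y(ell, t): so v(ell,t) = v2(t) iff y(ell,t) = 0
print(sp.diff(v, x, 2) - sp.diff(y(x, t), x, 2))   # 0: the added correction is linear in x
```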

    In this section, we study the error bound for the two cases corresponding to the 1-D and 2-D Chebyshev-weighted Sobolev spaces.

    Assume the following Chebyshev-weighted Sobolev spaces:

$$H^{\alpha,m}_{\omega(t)}(I_{1})=\left\{u:\ D^{\alpha+k}_{t}u\in L^{2}_{\omega(t)}(I_{1}),\ 0\leq k\leq m\right\}, \tag{4.1}$$
$$Y^{m}_{\hat{\omega}(x)}(I_{2})=\left\{u:\ u(0)=u(\ell)=0\ \text{and}\ D^{k}_{x}u\in L^{2}_{\hat{\omega}(x)}(I_{2}),\ 0\leq k\leq m\right\}, \tag{4.2}$$

    where $I_{1}=(0,\tau)$ and $I_{2}=(0,\ell)$ are equipped with the inner product, norm, and semi-norm

$$\begin{aligned}&(u,v)_{H^{\alpha,m}_{\omega(t)}}=\sum_{k=0}^{m}\left(D^{\alpha+k}_{t}u,\,D^{\alpha+k}_{t}v\right)_{L^{2}_{\omega(t)}},\qquad \|u\|^{2}_{H^{\alpha,m}_{\omega(t)}}=(u,u)_{H^{\alpha,m}_{\omega(t)}},\qquad |u|_{H^{\alpha,m}_{\omega(t)}}=\left\|D^{\alpha+m}_{t}u\right\|_{L^{2}_{\omega(t)}},\\&(u,v)_{Y^{m}_{\hat{\omega}(x)}}=\sum_{k=0}^{m}\left(D^{k}_{x}u,\,D^{k}_{x}v\right)_{L^{2}_{\hat{\omega}(x)}},\qquad \|u\|^{2}_{Y^{m}_{\hat{\omega}(x)}}=(u,u)_{Y^{m}_{\hat{\omega}(x)}},\qquad |u|_{Y^{m}_{\hat{\omega}(x)}}=\left\|D^{m}_{x}u\right\|_{L^{2}_{\hat{\omega}(x)}},\end{aligned} \tag{4.3}$$

    where $0<\alpha<1$ and $m\in\mathbb{N}$.

    Also, assume the following two-dimensional Chebyshev-weighted Sobolev space:

$$H^{r,s}_{\bar{\omega}(x,t)}(\Omega)=\left\{u:\ u(0,t)=u(\ell,t)=0\ \text{and}\ \frac{\partial^{\alpha+p+q}u}{\partial x^{p}\,\partial t^{\alpha+q}}\in L^{2}_{\bar{\omega}(x,t)}(\Omega),\ r\geq p\geq 0,\ s\geq q\geq 0\right\}, \tag{4.4}$$

    equipped with the norm and semi-norm

$$\|u\|_{H^{r,s}_{\bar{\omega}(x,t)}}=\left(\sum_{p=0}^{r}\sum_{q=0}^{s}\left\|\frac{\partial^{\alpha+p+q}u}{\partial x^{p}\,\partial t^{\alpha+q}}\right\|^{2}_{L^{2}_{\bar{\omega}(x,t)}}\right)^{\frac{1}{2}},\qquad |u|_{H^{r,s}_{\bar{\omega}(x,t)}}=\left\|\frac{\partial^{\alpha+r+s}u}{\partial x^{r}\,\partial t^{\alpha+s}}\right\|_{L^{2}_{\bar{\omega}(x,t)}}, \tag{4.5}$$

    where $0<\alpha<1$ and $r,s\in\mathbb{N}$.

    Lemma 4.1. [60] For $n\in\mathbb{N}$, $n+r>1$, and $n+s>1$, where $r,s\in\mathbb{R}$ are any constants, we have

$$\frac{\Gamma(n+r)}{\Gamma(n+s)}\leq o^{r,s}_{n}\,n^{r-s}, \tag{4.6}$$

    where

$$o^{r,s}_{n}=\exp\left(\frac{r-s}{2(n+s-1)}+\frac{1}{12(n+r-1)}+\frac{(r-s)^{2}}{n}\right). \tag{4.7}$$
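    A quick numerical illustration of Lemma 4.1 (a sketch; the parameter values are arbitrary test choices satisfying $n+r>1$ and $n+s>1$):

```python
import math

def bound_holds(n, r, s):
    """Check Gamma(n+r)/Gamma(n+s) <= o_n^{r,s} * n^(r-s), cf. (4.6)-(4.7)."""
    lhs = math.gamma(n + r) / math.gamma(n + s)
    o = math.exp((r - s) / (2 * (n + s - 1)) + 1 / (12 * (n + r - 1)) + (r - s) ** 2 / n)
    return lhs <= o * n ** (r - s)

print(all(bound_holds(n, 2.5, 0.5) for n in range(2, 50)))   # expected: True
print(all(bound_holds(n, 0.5, 2.5) for n in range(2, 50)))   # expected: True
```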

    Theorem 4.1. Suppose $0<\alpha<1$, and let $\hat{\eta}(t)=\sum_{j=0}^{M}\hat{\eta}_{j}U_{j}(t)$ be the approximation of $\eta(t)\in H^{\alpha,m}_{\omega(t)}(I_{1})$. Then, for $0\leq k\leq m\leq M+1$, we get

$$\left\|D^{\alpha+k}_{t}\left(\eta(t)-\hat{\eta}(t)\right)\right\|_{L^{2}_{\omega(t)}}\lesssim \tau^{m-k}\,M^{-\frac{5}{4}(m-k)}\,|\eta(t)|_{H^{\alpha,m}_{\omega(t)}}, \tag{4.8}$$

    where $A\lesssim B$ indicates the existence of a constant $\nu$ such that $A\leq\nu B$.

    Proof. The definitions of η(t) and ˆη(t) allow us to have

$$\begin{aligned}\left\|D^{\alpha+k}_{t}\left(\eta(t)-\hat{\eta}(t)\right)\right\|^{2}_{L^{2}_{\omega(t)}}&=\sum_{n=M+1}^{\infty}|\hat{\eta}_{n}|^{2}\left\|D^{\alpha+k}_{t}U_{n}(t)\right\|^{2}_{L^{2}_{\omega(t)}}=\sum_{n=M+1}^{\infty}|\hat{\eta}_{n}|^{2}\,\frac{\left\|D^{\alpha+k}_{t}U_{n}(t)\right\|^{2}_{L^{2}_{\omega(t)}}}{\left\|D^{\alpha+m}_{t}U_{n}(t)\right\|^{2}_{L^{2}_{\omega(t)}}}\,\left\|D^{\alpha+m}_{t}U_{n}(t)\right\|^{2}_{L^{2}_{\omega(t)}}\\&\leq\frac{\left\|D^{\alpha+k}_{t}U_{M+1}(t)\right\|^{2}_{L^{2}_{\omega(t)}}}{\left\|D^{\alpha+m}_{t}U_{M+1}(t)\right\|^{2}_{L^{2}_{\omega(t)}}}\,|\eta(t)|^{2}_{H^{\alpha,m}_{\omega(t)}}.\end{aligned} \tag{4.9}$$

    To estimate the factor $\dfrac{\left\|D^{\alpha+k}_{t}U_{M+1}(t)\right\|^{2}_{L^{2}_{\omega(t)}}}{\left\|D^{\alpha+m}_{t}U_{M+1}(t)\right\|^{2}_{L^{2}_{\omega(t)}}}$, we first find $\left\|D^{\alpha+k}_{t}U_{M+1}(t)\right\|^{2}_{L^{2}_{\omega(t)}}$:

$$\left\|D^{\alpha+k}_{t}U_{M+1}(t)\right\|^{2}_{L^{2}_{\omega(t)}}=\int_{0}^{\tau}D^{\alpha+k}_{t}U_{M+1}(t)\,D^{\alpha+k}_{t}U_{M+1}(t)\,\omega(t)\,dt. \tag{4.10}$$

    Equation (2.3) along with (2.4) allows us to write

$$D^{\alpha+k}_{t}U_{M+1}(t)=\sum_{r=k+1}^{M+1}\lambda_{r,M+1}\,\frac{r!}{\Gamma(r-k-\alpha+1)}\,t^{\,r-k-\alpha}, \tag{4.11}$$

    and accordingly, we have

$$\left\|D^{\alpha+k}_{t}U_{M+1}(t)\right\|^{2}_{L^{2}_{\omega(t)}}=\sum_{r=k+1}^{M+1}\frac{\lambda^{2}_{r,M+1}\,(r!)^{2}}{\Gamma^{2}(r-k-\alpha+1)}\int_{0}^{\tau}t^{\,2(r-k-\alpha)+\frac{1}{2}}(\tau-t)^{\frac{1}{2}}\,dt=\sum_{r=k+1}^{M+1}\frac{\lambda^{2}_{r,M+1}\,\tau^{2(r-k-\alpha+1)}\,\sqrt{\pi}\,(r!)^{2}\,\Gamma\!\left(2(r-k-\alpha)+\tfrac{3}{2}\right)}{2\,\Gamma^{2}(r-k-\alpha+1)\,\Gamma\!\left(2(r-k-\alpha)+3\right)}. \tag{4.12}$$

    The following inequality can be obtained after applying the Stirling formula [44]:

$$\frac{\Gamma^{2}(r+1)\,\Gamma\!\left(2(r-k-\alpha)+\tfrac{3}{2}\right)}{\Gamma^{2}(r-k-\alpha+1)\,\Gamma\!\left(2(r-k-\alpha)+3\right)}\lesssim r^{2(k+\alpha)}\,(r-k)^{-\frac{3}{2}}. \tag{4.13}$$

    By virtue of the Stirling formula [44] and Lemma 4.1, $\left\|D^{\alpha+k}_{t}U_{M+1}(t)\right\|^{2}_{L^{2}_{\omega(t)}}$ can be bounded as

$$\begin{aligned}\left\|D^{\alpha+k}_{t}U_{M+1}(t)\right\|^{2}_{L^{2}_{\omega(t)}}&\lesssim\lambda\,\tau^{2(M-k-\alpha+2)}(M+1)^{2(k+\alpha)}(M-k+1)^{-\frac{3}{2}}\sum_{r=k+1}^{M+1}1=\lambda\,\tau^{2(M-k-\alpha+2)}(M+1)^{2(k+\alpha)}(M-k+1)^{-\frac{1}{2}}\\&=\lambda\,\tau^{2(M-k-\alpha+2)}\left(\frac{\Gamma(M+2)}{\Gamma(M+1)}\right)^{2(k+\alpha)}\left(\frac{\Gamma(M-k+2)}{\Gamma(M-k+1)}\right)^{-\frac{1}{2}}\lesssim\tau^{2(M-k-\alpha+2)}\,M^{2(k+\alpha)}\,(M-k)^{-\frac{1}{2}},\end{aligned} \tag{4.14}$$

    where $\lambda=\max_{0\leq r\leq M+1}\left\{\lambda^{2}_{r,M+1}\,\tfrac{\sqrt{\pi}}{2}\right\}$.

    Similarly, we have

$$\left\|D^{\alpha+m}_{t}U_{M+1}(t)\right\|^{2}_{L^{2}_{\omega(t)}}\gtrsim\tau^{2(M-m-\alpha+2)}\,M^{2(m+\alpha)}\,(M-m)^{-\frac{1}{2}}, \tag{4.15}$$

    and accordingly, we have

$$\frac{\left\|D^{\alpha+k}_{t}U_{M+1}(t)\right\|^{2}_{L^{2}_{\omega(t)}}}{\left\|D^{\alpha+m}_{t}U_{M+1}(t)\right\|^{2}_{L^{2}_{\omega(t)}}}\lesssim\tau^{2(m-k)}\,M^{2(k-m)}\left(\frac{M-k}{M-m}\right)^{-\frac{1}{2}}=\tau^{2(m-k)}\,M^{-2(m-k)}\left(\frac{\Gamma(M-k+1)}{\Gamma(M-m+1)}\right)^{-\frac{1}{2}}\lesssim\tau^{2(m-k)}\,M^{-\frac{5}{2}(m-k)}. \tag{4.16}$$

    Inserting Eq (4.16) into Eq (4.9), one gets

$$\left\|D^{\alpha+k}_{t}\left(\eta(t)-\hat{\eta}(t)\right)\right\|^{2}_{L^{2}_{\omega(t)}}\lesssim\tau^{2(m-k)}\,M^{-\frac{5}{2}(m-k)}\,|\eta(t)|^{2}_{H^{\alpha,m}_{\omega(t)}}. \tag{4.17}$$

    Therefore, we get the desired result.

    Theorem 4.2. Suppose $\bar{\zeta}(x)=\sum_{i=0}^{M}\hat{\zeta}_{i}\,\varphi_{i}(x)$ is the approximation of $\zeta(x)\in Y^{m}_{\hat{\omega}(x)}(I_{2})$. Then, for $0\leq k\leq m\leq M+1$, we get

$$\left\|D^{k}_{x}\left(\zeta(x)-\bar{\zeta}(x)\right)\right\|_{L^{2}_{\hat{\omega}(x)}}\lesssim \ell^{\,m-k}\,M^{-\frac{1}{4}(m-k)}\,|\zeta(x)|_{Y^{m}_{\hat{\omega}(x)}}. \tag{4.18}$$

    Proof. At first, based on the definitions of $\zeta(x)$ and $\bar{\zeta}(x)$, one has

$$\left\|D^{k}_{x}\left(\zeta(x)-\bar{\zeta}(x)\right)\right\|^{2}_{L^{2}_{\hat{\omega}(x)}}=\sum_{n=M+1}^{\infty}|\hat{\zeta}_{n}|^{2}\left\|D^{k}_{x}\varphi_{n}(x)\right\|^{2}_{L^{2}_{\hat{\omega}(x)}}=\sum_{n=M+1}^{\infty}|\hat{\zeta}_{n}|^{2}\,\frac{\left\|D^{k}_{x}\varphi_{n}(x)\right\|^{2}_{L^{2}_{\hat{\omega}(x)}}}{\left\|D^{m}_{x}\varphi_{n}(x)\right\|^{2}_{L^{2}_{\hat{\omega}(x)}}}\,\left\|D^{m}_{x}\varphi_{n}(x)\right\|^{2}_{L^{2}_{\hat{\omega}(x)}}\leq\frac{\left\|D^{k}_{x}\varphi_{M+1}(x)\right\|^{2}_{L^{2}_{\hat{\omega}(x)}}}{\left\|D^{m}_{x}\varphi_{M+1}(x)\right\|^{2}_{L^{2}_{\hat{\omega}(x)}}}\,|\zeta(x)|^{2}_{Y^{m}_{\hat{\omega}(x)}}. \tag{4.19}$$

    Now, we have

$$D^{k}_{x}\varphi_{M+1}(x)=\sum_{r=k}^{M+1}\lambda_{r,M+1}\,\ell\,\frac{\Gamma(r+2)}{\Gamma(r-k+2)}\,x^{\,r-k+1}-\sum_{r=k}^{M+1}\lambda_{r,M+1}\,\frac{\Gamma(r+3)}{\Gamma(r-k+3)}\,x^{\,r-k+2}, \tag{4.20}$$

    and therefore, $\left\|D^{k}_{x}\varphi_{M+1}(x)\right\|^{2}_{L^{2}_{\hat{\omega}(x)}}$ can be written as

$$\left\|D^{k}_{x}\varphi_{M+1}(x)\right\|^{2}_{L^{2}_{\hat{\omega}(x)}}=\sum_{r=k}^{M+1}\ell^{\,2(r-k)}\lambda^{2}_{r,M+1}\,2\sqrt{\pi}\,\frac{\Gamma^{2}(r+2)\,\Gamma\!\left(2(r-k+1)-\tfrac{1}{2}\right)}{\Gamma^{2}(r-k+2)\,\Gamma\!\left(2(r-k+1)-1\right)}+\sum_{r=k}^{M+1}\ell^{\,2(r-k+1)}\lambda^{2}_{r,M+1}\,2\sqrt{\pi}\,\frac{\Gamma^{2}(r+3)\,\Gamma\!\left(2(r-k+2)-\tfrac{1}{2}\right)}{\Gamma^{2}(r-k+3)\,\Gamma\!\left(2(r-k+2)-1\right)}. \tag{4.21}$$

    The application of the Stirling formula [44] leads to

$$\frac{\Gamma^{2}(r+2)\,\Gamma\!\left(2(r-k+1)-\tfrac{1}{2}\right)}{\Gamma^{2}(r-k+2)\,\Gamma\!\left(2(r-k+1)-1\right)}\lesssim r^{2k}(r-k)^{\frac{1}{2}},\qquad \frac{\Gamma^{2}(r+3)\,\Gamma\!\left(2(r-k+2)-\tfrac{1}{2}\right)}{\Gamma^{2}(r-k+3)\,\Gamma\!\left(2(r-k+2)-1\right)}\lesssim r^{2k}(r-k)^{\frac{1}{2}}, \tag{4.22}$$

    and hence, we get

$$\left\|D^{k}_{x}\varphi_{M+1}(x)\right\|^{2}_{L^{2}_{\hat{\omega}(x)}}\lesssim \ell^{\,2(M-k+1)}\,M^{2k}\,(M-k)^{\frac{3}{2}}. \tag{4.23}$$

    Finally, we get the following estimation:

$$\frac{\left\|D^{k}_{x}\varphi_{M+1}(x)\right\|^{2}_{L^{2}_{\hat{\omega}(x)}}}{\left\|D^{m}_{x}\varphi_{M+1}(x)\right\|^{2}_{L^{2}_{\hat{\omega}(x)}}}\lesssim \ell^{\,2(m-k)}\,M^{-\frac{1}{2}(m-k)}. \tag{4.24}$$

    At the end, we get

$$\left\|D^{k}_{x}\left(\zeta(x)-\bar{\zeta}(x)\right)\right\|_{L^{2}_{\hat{\omega}(x)}}\lesssim \ell^{\,m-k}\,M^{-\frac{1}{4}(m-k)}\,|\zeta(x)|_{Y^{m}_{\hat{\omega}(x)}}. \tag{4.25}$$

    Theorem 4.3. Given the following assumptions: $\alpha=0$, $0\leq p\leq r\leq M+1$, and $\hat{v}(x,t)$ is the approximation to $v(x,t)\in H^{r,s}_{\bar{\omega}(x,t)}(\Omega)$. Then the following estimate holds:

$$\left\|\frac{\partial^{p}}{\partial x^{p}}\left(v(x,t)-\hat{v}(x,t)\right)\right\|_{L^{2}_{\bar{\omega}(x,t)}}\lesssim \ell^{\,r-p}\,M^{-\frac{1}{4}(r-p)}\,|v(x,t)|_{H^{r,0}_{\bar{\omega}(x,t)}}. \tag{4.26}$$

    Proof. According to the definitions of v(x,t) and ˆv(x,t), one has

$$v(x,t)-\hat{v}(x,t)=\sum_{i=0}^{M}\sum_{j=M+1}^{\infty}c_{ij}\,\varphi_{i}(x)\,U_{j}(t)+\sum_{i=M+1}^{\infty}\sum_{j=0}^{\infty}c_{ij}\,\varphi_{i}(x)\,U_{j}(t). \tag{4.27}$$

    Now, applying the same procedure as in Theorem 4.2, we obtain

$$\left\|\frac{\partial^{p}}{\partial x^{p}}\left(v(x,t)-\hat{v}(x,t)\right)\right\|_{L^{2}_{\bar{\omega}(x,t)}}\lesssim \ell^{\,r-p}\,M^{-\frac{1}{4}(r-p)}\,|v(x,t)|_{H^{r,0}_{\bar{\omega}(x,t)}}. \tag{4.28}$$

    Theorem 4.4. Given the following assumptions: $\alpha=0$, $0\leq q\leq s\leq M+1$, and $\hat{v}(x,t)$ is the approximation to $v(x,t)\in H^{r,s}_{\bar{\omega}(x,t)}(\Omega)$. Then the following estimate holds:

$$\left\|\frac{\partial^{q}}{\partial t^{q}}\left(v(x,t)-\hat{v}(x,t)\right)\right\|_{L^{2}_{\bar{\omega}(x,t)}}\lesssim \tau^{\,s-q}\,M^{-\frac{5}{4}(s-q)}\,|v(x,t)|_{H^{0,s}_{\bar{\omega}(x,t)}}. \tag{4.29}$$

    Theorem 4.5. Let $\hat{v}(x,t)$ be the approximation of $v(x,t)\in H^{r,s}_{\bar{\omega}(x,t)}(\Omega)$, and assume that $0<\alpha<1$. Then, for $0\leq p\leq r\leq M+1$ and $0\leq q\leq s\leq M+1$, we obtain

$$\left\|\frac{\partial^{\alpha+q}}{\partial t^{\alpha+q}}\left[\frac{\partial^{p}}{\partial x^{p}}\left(v(x,t)-\hat{v}(x,t)\right)\right]\right\|_{L^{2}_{\bar{\omega}(x,t)}}\lesssim \tau^{\,s-q}\,\ell^{\,r-p}\,M^{-\frac{1}{4}\left[5(s-q)+r-p\right]}\,|v(x,t)|_{H^{r,s}_{\bar{\omega}(x,t)}}. \tag{4.30}$$

    Proof. The proofs of Theorems 4.4 and 4.5 are similar to the proof of Theorem 4.3.

    Theorem 4.6. Let $R(x,t)$ be the residual of Eq (3.1); then $\left\|R(x,t)\right\|_{L^{2}_{\bar{\omega}(x,t)}}\to 0$ as $M\to\infty$.

    Proof. The norm $\left\|R(x,t)\right\|_{L^{2}_{\bar{\omega}(x,t)}}$ of the residual (3.17) can be estimated as

$$\left\|R(x,t)\right\|_{L^{2}_{\bar{\omega}(x,t)}}=\left\|\hat{v}_{t}(x,t)-D^{\alpha}_{t}\left[a\,\hat{v}_{xx}(x,t)\right]-b\,\hat{v}_{xx}(x,t)-S(x,t)\right\|_{L^{2}_{\bar{\omega}(x,t)}}\leq\left\|\frac{\partial}{\partial t}\left(v(x,t)-\hat{v}(x,t)\right)\right\|_{L^{2}_{\bar{\omega}(x,t)}}+a\left\|\frac{\partial^{\alpha}}{\partial t^{\alpha}}\left[\frac{\partial^{2}}{\partial x^{2}}\left(v(x,t)-\hat{v}(x,t)\right)\right]\right\|_{L^{2}_{\bar{\omega}(x,t)}}+b\left\|\frac{\partial^{2}}{\partial x^{2}}\left(v(x,t)-\hat{v}(x,t)\right)\right\|_{L^{2}_{\bar{\omega}(x,t)}}. \tag{4.31}$$

    Now, the application of Theorems 4.3–4.5 leads to

$$\left\|R(x,t)\right\|_{L^{2}_{\bar{\omega}(x,t)}}\lesssim\tau^{\,s-1}M^{-\frac{5}{4}(s-1)}\,|v(x,t)|_{H^{0,s}_{\bar{\omega}(x,t)}}+a\,\tau^{\,s}\,\ell^{\,r-2}M^{-\frac{1}{4}\left[5s+r-2\right]}\,|v(x,t)|_{H^{r,s}_{\bar{\omega}(x,t)}}+b\,\ell^{\,r-2}M^{-\frac{1}{4}(r-2)}\,|v(x,t)|_{H^{r,0}_{\bar{\omega}(x,t)}}. \tag{4.32}$$

    Therefore, it is clear that $\left\|R(x,t)\right\|_{L^{2}_{\bar{\omega}(x,t)}}\to 0$ as $M\to\infty$.

    This section compares our shifted second-kind Galerkin method (SSKGM) with other methods. Four test problems are presented in this regard.

    Example 5.1. [38] Consider the following equation:

$$v_{t}(x,t)-D^{\alpha}_{t}\left[v_{xx}(x,t)\right]-v_{xx}(x,t)=S(x,t),\qquad 0<\alpha<1, \tag{5.1}$$

    where

$$S(x,t)=-2\,t\,x(\ell-x)\left[(5x^{2}-5\ell x+\ell^{2})\left(\frac{6}{\Gamma(3-\alpha)}\,t^{1-\alpha}+3t\right)-x^{2}(\ell-x)^{2}\right], \tag{5.2}$$

    governed by (3.2) and (3.3). Problem (5.1) has the exact solution $v(x,t)=x^{3}(\ell-x)^{3}\,t^{2}$.
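    The source term (5.2) can be double-checked symbolically by the method of manufactured solutions: build $S$ from the stated exact solution using the Caputo rule (2.3) and compare with (5.2). The sympy sketch below is illustrative (it fixes $\alpha=1/2$ and $\ell=2$ only to make the final simplification fully numeric).

```python
import sympy as sp

x, t, ell, alpha = sp.symbols('x t ell alpha', positive=True)
v = x ** 3 * (ell - x) ** 3 * t ** 2                    # stated exact solution of (5.1)

v_xx = sp.diff(v, x, 2)
# Caputo D_t^alpha of c(x) * t^2 is c(x) * 2/Gamma(3 - alpha) * t^(2 - alpha), by (2.3)
Dalpha_vxx = v_xx / t ** 2 * 2 / sp.gamma(3 - alpha) * t ** (2 - alpha)
S_from_v = sp.diff(v, t) - Dalpha_vxx - v_xx            # left-hand side of (5.1) with a = b = 1

S_stated = -2 * t * x * (ell - x) * ((5 * x ** 2 - 5 * ell * x + ell ** 2)
            * (6 / sp.gamma(3 - alpha) * t ** (1 - alpha) + 3 * t) - x ** 2 * (ell - x) ** 2)

residual = sp.expand((S_from_v - S_stated).subs({alpha: sp.Rational(1, 2), ell: 2}))
print(sp.simplify(residual))                            # expected: 0
```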

    In Table 1, we compare the $L^{2}$ errors of the SSKGM with those obtained in [38] at $\ell=\tau=1$. Table 2 reports the central processing unit (CPU) time used to obtain the results in Table 1. These tables show the high accuracy of our method. Figure 1 illustrates the absolute errors (AEs) at different values of $\alpha$ at $M=4$ when $\ell=\tau=1$. Figure 2 illustrates the AEs at different values of $\alpha$ at $M=4$ when $\ell=3$ and $\tau=2$. Figure 3 shows the AEs at different values of $\alpha$ at $M=4$ when $\ell=10$ and $\tau=5$.

    Table 1.  Comparison of the L2 errors for Example 5.1.
    α | Our method (M=4) | Method in [38] (h=1/5000, T=1/128) | Method in [38] (T=1/5000, h=1/128)
    0.1 | 1.22946×10^-16 | 1.1552×10^-6 | 1.4408×10^-6
    0.5 | 2.40485×10^-16 | 1.0805×10^-6 | 1.4007×10^-6
    0.9 | 8.83875×10^-17 | 8.1511×10^-7 | 1.3682×10^-6

    Table 2.  CPU time (in seconds) used for Table 1.
    α | Our method (M=4) | Method in [38] (h=1/5000, T=1/128) | Method in [38] (T=1/5000, h=1/128)
    0.1 | 30.891 | 16.828 | 67.243
    0.5 | 35.953 | 16.733 | 67.470
    0.9 | 31.078 | 16.672 | 67.006

    Figure 1.  The AEs at different values of α for Example 5.1.
    Figure 2.  The AEs at different values of α for Example 5.1.
    Figure 3.  The AEs at different values of α for Example 5.1.

    Example 5.2. [38] Consider the following equation:

$$v_{t}(x,t)-D^{\alpha}_{t}\left[v_{xx}(x,t)\right]-v_{xx}(x,t)=S(x,t),\qquad 0<\alpha<1, \tag{5.3}$$

    where

$$S(x,t)=\sin(\pi x)\left[\frac{2\pi^{2}}{\Gamma(3-\alpha)}\,t^{2-\alpha}+\pi^{2}t^{2}+2t\right],$$

    governed by (3.2) and (3.3). Problem (5.3) has the exact solution $v(x,t)=t^{2}\sin(\pi x)$.

    Table 3 compares the $L^{2}$ errors of the SSKGM with those obtained by the method in [38] at $\ell=\tau=1$. This table shows that our results are more accurate. Table 4 reports the CPU time used to obtain the results in Table 3. Moreover, Figure 4 sketches the AEs at different values of $M$ when $\alpha=0.7$ and $\ell=\tau=1$. Table 5 presents the maximum AEs at $\alpha=0.8$ and $M=8$ when $\ell=\tau=1$. Figure 5 sketches the AEs at different values of $\alpha$ for $M=10$, $\ell=3$ and $\tau=1$.

    Table 3.  Comparison of the L2 errors for Example 5.2.
    α | Our method (M=8) | Method in [38] (h=1/5000, T=1/128) | Method in [38] (T=1/5000, h=1/128)
    0.1 | 4.97952×10^-10 | 9.1909×10^-5 | 5.1027×10^-5
    0.5 | 5.85998×10^-10 | 8.4317×10^-5 | 4.4651×10^-5
    0.9 | 4.62473×10^-10 | 6.2864×10^-5 | 4.0543×10^-5

    Table 4.  CPU time (in seconds) used for Table 3.
    α | Our method (M=8) | Method in [38] (h=1/5000, T=1/128) | Method in [38] (T=1/5000, h=1/128)
    0.1 | 119.061 | 20.095 | 70.952
    0.5 | 118.001 | 19.991 | 71.117
    0.9 | 121.36 | 19.908 | 71.153

    Figure 4.  The AEs at different values of M when α=0.7 for Example 5.2.
    Table 5.  The maximum AEs of Example 5.2 at α=0.8, M=8.
    x | t=0.2 | t=0.4 | t=0.6 | t=0.8
    0.1 | 7.82271×10^-12 | 9.69765×10^-11 | 1.48667×10^-10 | 2.65085×10^-10
    0.2 | 4.24386×10^-11 | 6.82703×10^-11 | 2.34553×10^-10 | 5.1182×10^-10
    0.3 | 4.48637×10^-11 | 6.28317×10^-11 | 2.69028×10^-10 | 4.28024×10^-10
    0.4 | 2.57132×10^-11 | 5.02944×10^-11 | 1.45928×10^-10 | 3.26067×10^-10
    0.5 | 6.59974×10^-11 | 1.22092×10^-11 | 4.38141×10^-10 | 7.16844×10^-10
    0.6 | 1.11684×10^-11 | 1.04731×10^-10 | 1.78963×10^-10 | 2.80989×10^-10
    0.7 | 4.19906×10^-11 | 7.33454×10^-11 | 2.70484×10^-10 | 4.25895×10^-10
    0.8 | 2.97515×10^-11 | 1.16961×10^-10 | 2.70621×10^-10 | 4.63116×10^-10
    0.9 | 1.0014×10^-11 | 8.68311×10^-11 | 1.43244×10^-10 | 2.71797×10^-10

    Figure 5.  The AEs at different values of α when M=10 for Example 5.2.

    Example 5.3. Consider the following equation:

$$v_{t}(x,t)-D^{\alpha}_{t}\left[v_{xx}(x,t)\right]-v_{xx}(x,t)=S(x,t),\qquad 0<\alpha<1, \tag{5.4}$$

    where

$$S(x,t)=\sin(2\pi x)\left(\frac{4\pi^{2}\,\Gamma(5)}{\Gamma(5-\alpha)}\,t^{4-\alpha}+4\pi^{2}t^{4}+4t^{3}\right),$$

    governed by (3.2) and (3.3). The exact solution of this problem is $v(x,t)=t^{4}\sin(2\pi x)$.

    Table 6 presents the maximum AEs at $\alpha=0.5$ and $M=9$ when $\ell=\tau=1$. Figure 6 sketches the AEs at different values of $M$ and $\alpha=0.9$ when $\ell=\tau=1$.

    Table 6.  The maximum AEs of Example 5.3 at α=0.5, M=9.
    x | t=0.2 | t=0.4 | t=0.6 | t=0.8
    0.1 | 4.36556×10^-8 | 6.4783×10^-8 | 4.6896×10^-8 | 3.07364×10^-8
    0.2 | 3.41326×10^-8 | 5.29923×10^-8 | 2.18292×10^-8 | 6.69242×10^-8
    0.3 | 4.74795×10^-8 | 5.70081×10^-5 | 1.14413×10^-7 | 1.69409×10^-7
    0.4 | 8.64528×10^-8 | 1.11593×10^-7 | 1.6302×10^-7 | 1.69931×10^-7
    0.5 | 2.03915×10^-8 | 2.95873×10^-8 | 2.78558×10^-8 | 2.04027×10^-11
    0.6 | 8.36608×10^-8 | 1.07773×10^-7 | 1.59124×10^-7 | 1.70274×10^-7
    0.7 | 1.53354×10^-8 | 1.00892×10^-8 | 7.07611×10^-8 | 1.68702×10^-7
    0.8 | 3.88458×10^-8 | 6.00347×10^-8 | 2.85788×10^-8 | 6.72342×10^-8
    0.9 | 3.00922×10^-8 | 4.49878×10^-8 | 2.82622×10^-8 | 3.06797×10^-8

    Figure 6.  The AEs at different values of M when α=0.9 for Example 5.3.

    Example 5.4. [61] Consider the following equation:

$$v_{t}(x,t)-D^{\alpha}_{t}\left[v_{xx}(x,t)\right]-v_{xx}(x,t)=S(x,t),\qquad 0<\alpha<1, \tag{5.5}$$

    governed by the following constraints:

$$v(x,0)=0,\qquad 0<x<1, \tag{5.6}$$
$$v(0,t)=t^{\gamma+2},\qquad v(1,t)=e\,t^{\gamma+2},\qquad 0<t\leq 1, \tag{5.7}$$

    where

$$S(x,t)=e^{x}\left(t^{\gamma+1}(\gamma-t+2)-\frac{\Gamma(\gamma+3)}{\Gamma(\gamma-\alpha+3)}\,t^{\gamma-\alpha+2}\right),$$

    and the exact solution of this problem is $v(x,t)=e^{x}\,t^{\gamma+2}$. This problem is solved for the case $\gamma=1$. In Table 7, we compare the $L^{2}$ errors of the SSKGM with those obtained in [61] at different values of $\alpha$. This table shows the high accuracy of our method. Figure 7 illustrates the AE (left) and the approximate solution (right) at $M=7$ when $\alpha=0.6$.
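    As in Example 5.1, the consistency of this source term with the stated exact solution (here with $\gamma=1$) can be verified symbolically; the following sympy sketch is illustrative only.

```python
import sympy as sp

x, t, alpha = sp.symbols('x t alpha', positive=True)
g = 1                                                   # the case gamma = 1 solved above
v = sp.exp(x) * t ** (g + 2)                            # stated exact solution

v_xx = sp.diff(v, x, 2)
Dalpha_vxx = sp.exp(x) * sp.gamma(g + 3) / sp.gamma(g + 3 - alpha) * t ** (g + 2 - alpha)  # by (2.3)
S_from_v = sp.diff(v, t) - Dalpha_vxx - v_xx            # left-hand side of (5.5) with a = b = 1

S_stated = sp.exp(x) * (t ** (g + 1) * (g - t + 2)
                        - sp.gamma(g + 3) / sp.gamma(g - alpha + 3) * t ** (g - alpha + 2))
print(sp.simplify(S_from_v - S_stated))                 # expected: 0
```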

    Table 7.  Comparison of the L2 errors for Example 5.4.
    α | Our method (M=7) | Method in [61] (n=m=10)
    0.1 | 1.39197×10^-11 | 2.176×10^-9
    0.3 | 1.34984×10^-10 | 9.045×10^-9
    0.5 | 5.87711×10^-11 | 1.516×10^-8
    0.7 | 8.43536×10^-12 | 1.415×10^-8
    0.9 | 7.46064×10^-12 | 5.749×10^-9

    Figure 7.  The AE (left) and the approximate solution (right) at M=7 when α=0.6 for Example 5.4.

    This study presented a Galerkin algorithm for solving the FRSE using orthogonal combinations of the second-kind CPs. The Galerkin method converts the FRSE with its underlying conditions into a matrix system whose entries are given explicitly, and a suitable algebraic solver can then be used to obtain the approximate solution. We showcased the effectiveness and precision of the algorithm through a comprehensive error analysis and several numerical examples. We believe the proposed method can be applied to other types of FDEs. As future work, we aim to employ the theoretical results developed in this paper, together with suitable spectral methods, to treat other problems.

    W. M. Abd-Elhameed: Conceptualization, Methodology, Validation, Formal analysis, Funding acquisition, Investigation, Project administration, Supervision, Writing–Original draft, Writing–review & editing. A. M. Al-Sady: Methodology, Validation, Writing–Original draft; O. M. Alqubori: Methodology, Validation, Investigation; A. G. Atta: Conceptualization, Methodology, Validation, Formal analysis, Visualization, Software, Writing–Original draft, Writing–review & editing. All authors have read and agreed to the published version of the manuscript.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was funded by the University of Jeddah, Jeddah, Saudi Arabia, under grant No. (UJ-23-FR-70). Therefore, the authors thank the University of Jeddah for its technical and financial support.

    The authors declare that they have no competing interests.



    [1] A. F. Bekhit, Introduction to computer vision, in Computer Vision and Augmented Reality in iOS, 1 (2022), 1−20. https://doi.org/10.1007/978-1-4842-7462-0_1
    [2] J. Han, L. Shao, D. Xu, J. Shotton, Enhanced computer vision with microsoft kinect sensor: a review, IEEE Trans. Cybern., 43 (2013), 1318−1334. https://doi.org/10.1109/TCYB.2013.2265378 doi: 10.1109/TCYB.2013.2265378
    [3] Y. O. Sharrab, I. Alsmadi, N. J. Sarhan, Towards the availability of video communication in artificial intelligence-based computer vision systems utilizing a multi-objective function, Cluster Comput., 25 (2022), 231−247. https://doi.org/10.1007/s10586-021-03391-4 doi: 10.1007/s10586-021-03391-4
    [4] N. Haering, P. L. Venetianer, A. Lipton, The evolution of video surveillance: an overview, Mach. Vision Appl., 19 (2008), 279−290. https://doi.org/10.1007/s00138-008-0152-0 doi: 10.1007/s00138-008-0152-0
    [5] Y. Zhang, H. Liu, Constraints and countermeasures of the new situation of population on the future development of higher vocational education−based on the analysis of the seventh national population survey (in Chinese), Educ. Vocation, 6 (2022), 12−20. https://doi.org/10.13615/j.cnki.1004-3985.2022.06.016 doi: 10.13615/j.cnki.1004-3985.2022.06.016
    [6] P. Li, Y. Hu, X. Wu, R. He, Z. Sun, Deep label refinement for age estimation, Pattern Recognit., 100 (2020), 107178. https://doi.org/10.1016/j.patcog.2019.107178 doi: 10.1016/j.patcog.2019.107178
    [7] Y. Yu, K. Tang, Y. Liu, A fine-tuning based approach for daily activity recognition between smart homes, Appl. Sci., 13 (2023). https://doi.org/10.3390/app13095706 doi: 10.3390/app13095706
    [8] Z. Li, F. Liu, W. Yang, S. Peng, J. Zhou, A survey of convolutional neural networks: analysis, applications, and prospects, IEEE Trans. Neural Networks Learn. Syst., 33 (2022), 6999−7019. https://doi.org/10.1109/TNNLS.2021.3084827 doi: 10.1109/TNNLS.2021.3084827
    [9] O. Guehairia, A. Ouamane, F. Dornaika, A. Taleb-Ahmed, Deep random forest for facial age estimation based on face images, in 2020 1st International Conference on Communications, Control Systems and Signal Processing (CCSSP), IEEE, (2020), 305−309. https://doi.org/10.1109/CCSSP49278.2020.9151621
    [10] M. M. Badr, A. M. Sarhan, R. M. Elbasiony, ICRL: using landmark ratios with cascade model for an accurate age estimation system using deep neural networks, J. Intell. Fuzzy Syst., 43 (2022), 72−79. https://doi.org/10.3233/JIFS-211267 doi: 10.3233/JIFS-211267
    [11] B. Zhang, Y. Bao, Age estimation of faces in videos using head pose estimation and convolutional neural networks, Sensors, 22 (2022), 4171. https://doi.org/10.3390/s22114171 doi: 10.3390/s22114171
    [12] S. Pramanik, H. A. B. Dahlan, Face age estimation using shortcut identity connection of convolutional neural network, Int. J. Adv. Comput. Sci. Appl., 13 (2022), 515−521. https://doi.org/10.14569/IJACSA.2022.0130459 doi: 10.14569/IJACSA.2022.0130459
    [13] K. Y. Chang, C. S. Chen, Y. P. Hung, Ordinal hyperplanes ranker with cost sensitivities for age estimation, in CVPR 2011, IEEE, (2011), 585−592. https://doi.org/10.1109/CVPR.2011.5995437
    [14] W. Wang, T. Ishikawa, H. Watanabe, Facial age estimation by curriculum learning, in 2020 IEEE 9th Global Conference on Consumer Electronics (GCCE), IEEE, (2020), 138−139. https://doi.org/10.1109/GCCE50665.2020.9291929
    [15] G. L. Santos, P. T. Endo, K. H. de Carvalho Monteiro, E. da Silva Rocha, I. Silva, T. Lynn, Accelerometer-based human fall detection using convolutional neural networks, Sensors, 19 (2019), 1644. https://doi.org/10.3390/s19071644 doi: 10.3390/s19071644
    [16] Z. Niu, M. Zhou, L. Wang, X. Gao, G. Hua, Ordinal regression with multiple output cnn for age estimation, in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2016), 4920−4928. https://doi.org/10.1109/CVPR.2016.532
    [17] A. Schmeling, G. Geserick, W. Reisinger, A. Olze, Age estimation, Forensic Sci. Int., 165 (2007), 178−181. https://doi.org/10.1016/j.forsciint.2006.05.016 doi: 10.1016/j.forsciint.2006.05.016
    [18] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2016), 770−778. https://doi.org/10.1109/CVPR.2016.90
    [19] G. Huang, Z. Liu, L. van der Maaten, K. Q. Weinberger, Densely connected convolutional networks, in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2017), 4700−4708. https://doi.org/10.1109/CVPR.2017.243
    [20] O. Agbo-Ajala, S. Viriri, Deep learning approach for facial age classification: a survey of the state-of-the-art, Artif. Intell. Rev., 54 (2021), 179−213. https://doi.org/10.1007/s10462-020-09855-0 doi: 10.1007/s10462-020-09855-0
    [21] Y. Ma, Y. Tao, Y. Gong, W. Cui, B. Wang, Driver identification and fatigue detection algorithm based on deep learning, Math. Biosci. Eng., 20 (2023), 8162−8189. https://doi.org/10.3934/mbe.2023355 doi: 10.3934/mbe.2023355
  • This article has been cited by:

    1. Waleed Mohamed Abd-Elhameed, Omar Mazen Alqubori, On generalized Hermite polynomials, 2024, 9, 2473-6988, 32463, 10.3934/math.20241556
    2. Waleed Mohamed Abd-Elhameed, Omar Mazen Alqubori, Abdulrahman Khalid Al-Harbi, Mohammed H. Alharbi, Ahmed Gamal Atta, Generalized third-kind Chebyshev tau approach for treating the time fractional cable problem, 2024, 32, 2688-1594, 6200, 10.3934/era.2024288
    3. Waleed Mohamed Abd-Elhameed, Omar Mazen Alqubori, Ahmed Gamal Atta, A Collocation Procedure for Treating the Time-Fractional FitzHugh–Nagumo Differential Equation Using Shifted Lucas Polynomials, 2024, 12, 2227-7390, 3672, 10.3390/math12233672
    4. Waleed Mohamed Abd-Elhameed, Abdullah F. Abu Sunayh, Mohammed H. Alharbi, Ahmed Gamal Atta, Spectral tau technique via Lucas polynomials for the time-fractional diffusion equation, 2024, 9, 2473-6988, 34567, 10.3934/math.20241646
    5. M.H. Heydari, M. Razzaghi, M. Bayram, A numerical approach for multi-dimensional ψ-Hilfer fractional nonlinear Galilei invariant advection–diffusion equations, 2025, 68, 22113797, 108067, 10.1016/j.rinp.2024.108067
    6. Youssri Hassan Youssri, Waleed Mohamed Abd-Elhameed, Amr Ahmed Elmasry, Ahmed Gamal Atta, An Efficient Petrov–Galerkin Scheme for the Euler–Bernoulli Beam Equation via Second-Kind Chebyshev Polynomials, 2025, 9, 2504-3110, 78, 10.3390/fractalfract9020078
    7. Minilik Ayalew, Mekash Ayalew, Mulualem Aychluh, Numerical approximation of space-fractional diffusion equation using Laguerre spectral collocation method, 2025, 2661-3352, 10.1142/S2661335224500291
    8. Waleed Mohamed Abd-Elhameed, Omar Mazen Alqubori, Ahmed Gamal Atta, A Collocation Approach for the Nonlinear Fifth-Order KdV Equations Using Certain Shifted Horadam Polynomials, 2025, 13, 2227-7390, 300, 10.3390/math13020300
    9. Waleed Mohamed Abd-Elhameed, Omar Mazen Alqubori, Ahmed Gamal Atta, A collocation procedure for the numerical treatment of FitzHugh–Nagumo equation using a kind of Chebyshev polynomials, 2025, 10, 2473-6988, 1201, 10.3934/math.2025057
    10. M. Hosseininia, M.H. Heydari, D. Baleanu, M. Bayram, A hybrid method based on the classical/piecewise Chebyshev cardinal functions for multi-dimensional fractional Rayleigh–Stokes equations, 2025, 25, 25900374, 100541, 10.1016/j.rinam.2025.100541
    11. Waleed Mohamed Abd-Elhameed, Abdulrahman Khalid Al-Harbi, Omar Mazen Alqubori, Mohammed H. Alharbi, Ahmed Gamal Atta, Collocation Method for the Time-Fractional Generalized Kawahara Equation Using a Certain Lucas Polynomial Sequence, 2025, 14, 2075-1680, 114, 10.3390/axioms14020114
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
