
Financial market disruption and investor awareness: the case of implied volatility skew

  • The crash of 1987 is considered one of the most significant events in the history of financial markets due to the severity and swiftness of market declines worldwide. In the aftermath of the crash, a permanent change in options market occurred; implied volatility skew started appearing in options markets worldwide. In this article, we argue that the emergence of the implied volatility skew can be understood as arising from increased investor awareness about the stock price process and its implications for delta hedging. Delta-hedging aims to eliminate the directional risk associated with price movements in the underlying asset. Before the crash, investors were unaware of the proposition that "a delta-hedged portfolio is risky". That is, they implicitly believed in the proposition that "a delta-hedged portfolio is risk-free". The crash caused "portfolio insurance delta-hedges" to fail spectacularly. The resulting visceral shock drove home the lesson that "a delta-hedged portfolio is risky", thus, increasing investor awareness. We show that this sudden realization that a delta-hedged portfolio is risky is sufficient to generate the implied volatility skew and is equivalent to replacing the risk-free rate with a higher rate in the European call option formula. It follows that investor awareness (beyond asymmetric information) is an important consideration that matters for financial market behavior.

    Citation: Hammad Siddiqi. Financial market disruption and investor awareness: the case of implied volatility skew[J]. Quantitative Finance and Economics, 2022, 6(3): 505-517. doi: 10.3934/QFE.2022021




    A wide range of fields rely on Chebyshev polynomials (CPs). Some kinds of CPs are well-known special cases of Jacobi polynomials (JPs); four kinds of CPs can be extracted from JPs, and they have been employed in many applications; see [1,2,3,4]. Other kinds can be regarded as special cases of generalized ultraspherical polynomials; see [5,6]. Some contributions introduced and utilized other specific kinds of generalized ultraspherical polynomials. In the sequence of papers [7,8,9,10], the authors utilized CPs of the fifth and sixth kinds to treat different types of differential equations (DEs). Furthermore, eighth-kind CPs were utilized in [11,12] to solve other types of DEs.

    Several phenomena that arise in different applied sciences can be better understood by delving into fractional calculus, which studies integration and differentiation of non-integer orders. When describing important phenomena, fractional differential equations (FDEs) are vital. There are many examples of FDE applications; see, for instance, [13,14,15]. Because it is usually not feasible to find analytical solutions for these equations, numerical methods are relied upon. Several methods were utilized to tackle various types of FDEs, among them: the Adomian decomposition method [16], a finite difference scheme [17], the generalized finite difference method [18], the Gauss collocation method [19], the inverse Laplace transform [20], the residual power series method [21], multi-step methods [22], the Haar wavelet method [23], matrix methods [24,25,26], collocation methods [27,28,29,30], Galerkin methods [31,32,33], and a neural networks method [34].

    Among the essential FDEs are the Rayleigh-Stokes equations. The fractional Rayleigh-Stokes equation is a mathematical model for the motion of fluids involving fractional derivatives. This equation is used in many areas of study, such as non-Newtonian fluids, viscoelastic fluids, and fluid dynamics. Many contributions were devoted to investigating Rayleigh-Stokes equations from theoretical and numerical perspectives. Theoretically, one can consult [35,36,37]. Several numerical approaches have been followed to solve these equations. In [38], the authors used a finite difference method for the fractional Rayleigh-Stokes equation (FRSE). In [39], a computational method for the two-dimensional FRSE was developed. The authors of [40] used a finite volume element algorithm to treat a nonlinear FRSE. In [41], a numerical method was applied to handle a type of Rayleigh-Stokes problem. Discrete Hahn polynomials were used to treat a variable-order two-dimensional FRSE in [42]. The authors of [43] numerically solved the FRSE.

    The significance of spectral approaches in engineering and fluid dynamics has become better understood in recent years, and this trend is being further explored in the applied sciences [44,45,46]. In these techniques, approximate solutions of integral and differential equations are expressed as expansions in a variety of polynomials, which are frequently orthogonal. The three spectral techniques used most often are the collocation, tau, and Galerkin methods. The optimal spectral approach for a given equation depends on the nature of the DE and the conditions that govern it. The three spectral methods use distinct trial and test functions. In the Galerkin method, the test and trial functions are chosen so that each basis function satisfies the conditions underlying the given DE; see [47,48]. The tau method is not limited to a specific set of basis functions like the Galerkin approach, which is why it can handle many types of DEs; see [49]. Among the spectral methods, the collocation method is the most flexible and widely applicable; see, for example, [50,51].

    In his seminal papers [52,53], Shen explored a new idea for applying the Galerkin method. He selected orthogonal combinations of Legendre and first-kind CPs to solve second- and fourth-order DEs, and the Galerkin approach was used to discretize the problems together with their governing conditions. The authors of [54] employed a generalized combination of basis functions to solve even-order DEs.

    The main contribution and significance of this paper is the development of a new Galerkin approach for treating the FRSE. The suggested technique has the advantage that it yields accurate approximations while retaining only a small number of modes of the selected Galerkin basis functions.

    The current paper has the following structure. Section 2 presents some preliminaries and essential relations. Section 3 describes a Galerkin approach for treating the FRSE. A comprehensive convergence analysis is presented in Section 4. Section 5 is devoted to presenting some illustrative examples that show the efficiency and applicability of our proposed method. Section 6 reports some conclusions.

    This section defines the fractional Caputo derivative and reviews some of its essential properties. Next, we gather significant characteristics of the second-kind CPs. This paper will use some orthogonal combinations of the second-kind CPs to solve the FRSE.

    Definition 2.1. In Caputo's sense, the fractional-order derivative of the function \xi(s) is defined as [55]

    \begin{equation} D^{\alpha}\xi(s) = \frac{1}{\Gamma(p-\alpha)}\int_{0}^{s}(s-t)^{p-\alpha-1}\,\xi^{(p)}(t)\,dt,\quad \alpha>0,\ s>0,\ p-1<\alpha<p,\ p\in\mathbb{N}. \end{equation} (2.1)

    For D^{\alpha} with p-1<\alpha<p,\ p\in\mathbb{N}, the following identities are valid:

    \begin{equation} D^{\alpha}C = 0,\quad C \text{ is a constant}, \end{equation} (2.2)
    \begin{equation} D^{\alpha}s^{p} = \begin{cases} 0, & \text{if}\ p\in\mathbb{N}_{0}\ \text{and}\ p<\lceil\alpha\rceil,\\ \dfrac{p!}{\Gamma(p-\alpha+1)}\,s^{p-\alpha}, & \text{if}\ p\in\mathbb{N}_{0}\ \text{and}\ p\geq\lceil\alpha\rceil, \end{cases} \end{equation} (2.3)

    where \mathbb{N}=\{1,2,\ldots\}, \mathbb{N}_{0}=\{0,1,2,\ldots\}, and \lceil\alpha\rceil is the ceiling function.
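
    The identity (2.3) is easy to apply programmatically. The following minimal Python sketch (ours, not part of the paper; the helper name caputo_monomial is hypothetical) evaluates the Caputo derivative of a constant and of a monomial according to (2.2) and (2.3):

```python
# Minimal sketch: Caputo derivative of t**p per Eq (2.3); Eq (2.2) is the p = 0 case.
from math import gamma, ceil

def caputo_monomial(p: int, alpha: float):
    """Return the function t -> D^alpha t^p in Caputo's sense, per Eq (2.3)."""
    if p < ceil(alpha):          # includes the constant case p = 0
        return lambda t: 0.0
    coeff = gamma(p + 1) / gamma(p - alpha + 1)
    return lambda t: coeff * t ** (p - alpha)

# Example: D^{0.5} t^2 = 2/Gamma(2.5) * t^{1.5}
print(caputo_monomial(2, 0.5)(1.0))   # ~ 1.5045
print(caputo_monomial(0, 0.5)(1.0))   # 0.0 (Caputo derivative of a constant)
```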

    The shifted second-kind CPs \mathbf{U}^{*}_{j}(t) are orthogonal with respect to the weight function \omega(t) = \sqrt{t\,(\tau-t)} on the interval [0,\tau] and are defined as [56,57]

    \begin{equation} \mathbf{U}^{*}_{j}(t) = \sum\limits_{r=0}^{j}\lambda_{r,j}\,t^{r},\quad j\geq 0, \end{equation} (2.4)

    where

    \begin{equation} \lambda_{r,j} = \frac{2^{2r}\,(-1)^{j+r}\,(j+r+1)!}{\tau^{r}\,(2r+1)!\,(j-r)!}, \end{equation} (2.5)

    with the following orthogonality relation [56]:

    \begin{equation} \int_{0}^{\tau}\omega(t)\,\mathbf{U}^{*}_{m}(t)\,\mathbf{U}^{*}_{n}(t)\,dt = q_{m,n}, \end{equation} (2.6)

    where

    \begin{equation} q_{m,n} = \frac{\pi\,\tau^{2}}{8}\begin{cases} 1, & \text{if}\ m=n,\\ 0, & \text{if}\ m\neq n. \end{cases} \end{equation} (2.7)

    \{\mathbf{U}^{*}_{m}(t)\}_{m\geq 0} can be generated by the recursive formula:

    \begin{equation} \mathbf{U}^{*}_{m}(t) = 2\left(\frac{2t}{\tau}-1\right)\mathbf{U}^{*}_{m-1}(t)-\mathbf{U}^{*}_{m-2}(t),\quad \mathbf{U}^{*}_{0}(t) = 1,\quad \mathbf{U}^{*}_{1}(t) = 2\left(\frac{2t}{\tau}-1\right),\quad m\geq 2. \end{equation} (2.8)
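
    As a quick consistency check of the formulas above, the following Python sketch (ours, not the paper's; it assumes \tau = 1, and the helper name U_star is hypothetical) generates \mathbf{U}^{*}_{m} from the recurrence (2.8) and confirms the orthogonality relation (2.6)-(2.7) by numerical quadrature:

```python
# Sketch: build U*_m via the recurrence (2.8) and test the orthogonality (2.6)-(2.7).
import numpy as np
from numpy.polynomial import polynomial as P
from scipy.integrate import quad

tau = 1.0

def U_star(m):
    """Coefficient array (ascending powers of t) of U*_m on [0, tau]."""
    U0, U1 = np.array([1.0]), np.array([-2.0, 4.0 / tau])   # U*_1(t) = 2*(2t/tau - 1)
    if m == 0: return U0
    if m == 1: return U1
    x = np.array([-2.0, 4.0 / tau])                          # polynomial 2*(2t/tau - 1)
    for _ in range(2, m + 1):
        U0, U1 = U1, P.polysub(P.polymul(x, U1), U0)
    return U1

def inner(m, n):
    w = lambda t: np.sqrt(t * (tau - t))                      # weight of Eq (2.6)
    f = lambda t: w(t) * P.polyval(t, U_star(m)) * P.polyval(t, U_star(n))
    return quad(f, 0.0, tau)[0]

print(inner(3, 3), np.pi * tau**2 / 8)   # both ~ 0.3927, as in Eq (2.7)
print(inner(3, 5))                        # ~ 0 (orthogonality)
```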

    The following theorem, which presents the derivatives of \mathbf{U}^{*}_{j}(t), is helpful in what follows.

    Theorem 2.1. [56] For all j\geq n, the following formula is valid:

    \begin{equation} D^{n}\mathbf{U}^{*}_{j}(t) = \left(\frac{4}{\tau}\right)^{n}\sum\limits_{\substack{p=0\\ (p+j+n)\,\mathrm{even}}}^{j-n}\frac{(p+1)\,(n)_{\frac{1}{2}(j-n-p)}}{\left(\frac{1}{2}(j-n-p)\right)!\,\left(\frac{1}{2}(j+n+p+2)\right)_{1-n}}\,\mathbf{U}^{*}_{p}(t). \end{equation} (2.9)

    The following particular formulas of (2.9) give expressions for the first- and second-order derivatives.

    Corollary 2.1. The following derivative formulas are valid:

    \begin{equation} D\,\mathbf{U}^{*}_{j}(t) = \frac{4}{\tau}\sum\limits_{\substack{p=0\\ (p+j)\,\mathrm{odd}}}^{j-1}(p+1)\,\mathbf{U}^{*}_{p}(t),\quad j\geq 1, \end{equation} (2.10)
    \begin{equation} D^{2}\mathbf{U}^{*}_{j}(t) = \frac{4}{\tau^{2}}\sum\limits_{\substack{p=0\\ (p+j)\,\mathrm{even}}}^{j-2}(p+1)\,(j-p)\,(j+p+2)\,\mathbf{U}^{*}_{p}(t),\quad j\geq 2. \end{equation} (2.11)

    Proof. Special cases of Theorem 2.1.
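
    Both formulas can be validated numerically. The sketch below (ours, not the paper's; it assumes \tau = 1 and reuses U_star from the previous snippet) compares (2.10) and (2.11) with direct polynomial differentiation:

```python
# Sketch: check the derivative expansions (2.10)-(2.11) at sample points.
import numpy as np
from numpy.polynomial import polynomial as P

tau, j = 1.0, 6
ts = np.linspace(0.05, 0.95, 7)

d1 = P.polyval(ts, P.polyder(U_star(j)))
rhs1 = (4 / tau) * sum((p + 1) * P.polyval(ts, U_star(p))
                       for p in range(j) if (p + j) % 2 == 1)
print(np.allclose(d1, rhs1))   # True -> Eq (2.10)

d2 = P.polyval(ts, P.polyder(U_star(j), 2))
rhs2 = (4 / tau**2) * sum((p + 1) * (j - p) * (j + p + 2) * P.polyval(ts, U_star(p))
                          for p in range(j - 1) if (p + j) % 2 == 0)
print(np.allclose(d2, rhs2))   # True -> Eq (2.11)
```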

    This section is devoted to analyzing a Galerkin approach to solve the following FRSE [38,58]:

    \begin{equation} v_{t}(x,t)-D_{t}^{\alpha}\,[\,a\,v_{xx}(x,t)\,]-b\,v_{xx}(x,t) = \mathcal{S}(x,t),\quad 0<\alpha<1, \end{equation} (3.1)

    governed by the following constraints:

    \begin{equation} v(x,0) = v_{0}(x),\quad 0<x<\ell, \end{equation} (3.2)
    \begin{equation} v(0,t) = v_{1}(t),\quad v(\ell,t) = v_{2}(t),\quad 0<t\leq\tau, \end{equation} (3.3)

    where a and b are two positive constants and S(x,t) is a known smooth function.

    Remark 3.1. The well-posedness and regularity of the fractional Rayleigh-Stokes problem are discussed in detail in [36].

    We choose the trial functions to be

    \begin{equation} \varphi_{i}(x) = x\,(\ell-x)\,\mathbf{U}^{*}_{i}(x). \end{equation} (3.4)

    Due to (2.6), it can be seen that \{\varphi_{i}(x)\}_{i\geq 0} satisfies the following orthogonality relation:

    \begin{equation} \int_{0}^{\ell}\hat{\omega}(x)\,\varphi_{i}(x)\,\varphi_{j}(x)\,dx = a_{i,j}, \end{equation} (3.5)

    where

    \begin{equation} a_{i,j} = \frac{\pi\,\ell^{2}}{8}\begin{cases} 1, & \text{if}\ i=j,\\ 0, & \text{if}\ i\neq j, \end{cases} \end{equation} (3.6)

    and \hat{\omega}(x) = \dfrac{1}{x^{\frac{3}{2}}\,(\ell-x)^{\frac{3}{2}}}.

    Theorem 3.1. The second derivative of \varphi_{i}(x) can be expressed explicitly in terms of \mathbf{U}^{*}_{j}(x) as

    \begin{equation} \frac{d^{2}\varphi_{i}(x)}{dx^{2}} = \sum\limits_{j=0}^{i}\mu_{j,i}\,\mathbf{U}^{*}_{j}(x), \end{equation} (3.7)

    where

    \begin{equation} \mu_{j,i} = -2\begin{cases} j+1, & \text{if}\ i>j\ \text{and}\ (i+j)\ \text{even},\\ \frac{1}{2}\,(i+1)(i+2), & \text{if}\ i=j,\\ 0, & \text{otherwise}. \end{cases} \end{equation} (3.8)

    Proof. Based on the basis functions in (3.4), we can write

    \begin{equation} \frac{d^{2}\varphi_{i}(x)}{dx^{2}} = -2\,\mathbf{U}^{*}_{i}(x)+2\,(\ell-2x)\,\frac{d\mathbf{U}^{*}_{i}(x)}{dx}+(\ell x-x^{2})\,\frac{d^{2}\mathbf{U}^{*}_{i}(x)}{dx^{2}}. \end{equation} (3.9)

    Using Corollary 2.1, Eq (3.9) may be rewritten as

    \begin{equation} \begin{split} \frac{d^{2}\varphi_{i}(x)}{dx^{2}} = & -2\,\mathbf{U}^{*}_{i}(x)+8\sum\limits_{\substack{p=0\\ (p+i)\,\mathrm{odd}}}^{i-1}(p+1)\,\mathbf{U}^{*}_{p}(x)-\frac{16}{\ell}\sum\limits_{\substack{p=0\\ (p+i)\,\mathrm{odd}}}^{i-1}(p+1)\,x\,\mathbf{U}^{*}_{p}(x)\\ & +\frac{4}{\ell}\sum\limits_{\substack{p=0\\ (p+i)\,\mathrm{even}}}^{i-2}(p+1)(i-p)(i+p+2)\,x\,\mathbf{U}^{*}_{p}(x)-\frac{4}{\ell^{2}}\sum\limits_{\substack{p=0\\ (p+i)\,\mathrm{even}}}^{i-2}(p+1)(i-p)(i+p+2)\,x^{2}\,\mathbf{U}^{*}_{p}(x). \end{split} \end{equation} (3.10)

    With the aid of the recurrence relation (2.8), the following recurrence relation for \mathbf{U}^{*}_{i}(x) holds:

    \begin{equation} x\,\mathbf{U}^{*}_{i}(x) = \frac{\ell}{4}\left[\mathbf{U}^{*}_{i+1}(x)+2\,\mathbf{U}^{*}_{i}(x)+\mathbf{U}^{*}_{i-1}(x)\right]. \end{equation} (3.11)

    Moreover, the last relation enables us to write the following relation:

    \begin{equation} x^{2}\,\mathbf{U}^{*}_{i}(x) = \frac{\ell^{2}}{16}\left[\mathbf{U}^{*}_{i+2}(x)+4\,\mathbf{U}^{*}_{i+1}(x)+6\,\mathbf{U}^{*}_{i}(x)+4\,\mathbf{U}^{*}_{i-1}(x)+\mathbf{U}^{*}_{i-2}(x)\right]. \end{equation} (3.12)

    If we insert relations (3.11) and (3.12) into relation (3.10), and perform some computations, then we get

    \begin{equation} \frac{d^{2}\varphi_{i}(x)}{dx^{2}} = \sum\limits_{j=0}^{i}\mu_{j,i}\,\mathbf{U}^{*}_{j}(x), \end{equation} (3.13)

    where

    \begin{equation} \mu_{j,i} = -2\begin{cases} j+1, & \text{if}\ i>j\ \text{and}\ (i+j)\ \text{even},\\ \frac{1}{2}\,(i+1)(i+2), & \text{if}\ i=j,\\ 0, & \text{otherwise}. \end{cases} \end{equation} (3.14)

    This completes the proof.
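
    The statement of Theorem 3.1 is easy to verify numerically. The sketch below (ours, not the paper's) assumes \ell = \tau = 1, so that U_star from the earlier snippets also serves as the shifted basis on [0,\ell], and checks (3.7)-(3.8) against direct differentiation of \varphi_{i}(x):

```python
# Sketch: verify d^2/dx^2 [x*(ell-x)*U*_i(x)] = sum_j mu_{j,i} U*_j(x) for one i.
import numpy as np
from numpy.polynomial import polynomial as P

ell, i = 1.0, 5
xs = np.linspace(0.05, 0.95, 7)

def mu(j, i):
    if i == j:
        return -(i + 1) * (i + 2)            # -2 * (1/2)(i+1)(i+2)
    if i > j and (i + j) % 2 == 0:
        return -2.0 * (j + 1)
    return 0.0

phi_i = P.polymul([0.0, ell, -1.0], U_star(i))   # x*(ell - x)*U*_i(x)
lhs = P.polyval(xs, P.polyder(phi_i, 2))
rhs = sum(mu(j, i) * P.polyval(xs, U_star(j)) for j in range(i + 1))
print(np.allclose(lhs, rhs))   # True
```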

    Consider the FRSE (3.1), governed by the homogeneous boundary conditions v(0,t) = v(\ell,t) = 0.

    Now, consider the following spaces:

    \begin{equation} \begin{split} &\mathbf{P}_{\mathcal{M}}(\Omega) = \mathrm{span}\left\{\varphi_{i}(x)\,\mathbf{U}^{*}_{j}(t):\ i,j = 0,1,\ldots,\mathcal{M}\right\},\\ &\mathbf{X}_{\mathcal{M}}(\Omega) = \left\{v(x,t)\in\mathbf{P}_{\mathcal{M}}(\Omega):\ v(0,t) = v(\ell,t) = 0\right\}, \end{split} \end{equation} (3.15)

    where \Omega = (0,\ell)\times(0,\tau].

    The approximate solution \hat{v}(x,t)\in\mathbf{X}_{\mathcal{M}}(\Omega) may be expressed as

    \begin{equation} \hat{v}(x,t) = \sum\limits_{i=0}^{\mathcal{M}}\sum\limits_{j=0}^{\mathcal{M}}c_{ij}\,\varphi_{i}(x)\,\mathbf{U}^{*}_{j}(t) = \boldsymbol{\varphi}\,\mathbf{C}\,\mathbf{U}^{T}, \end{equation} (3.16)

    where

    \boldsymbol{\varphi} = [\varphi_{0}(x),\varphi_{1}(x),\ldots,\varphi_{\mathcal{M}}(x)],
    \mathbf{U} = [\mathbf{U}^{*}_{0}(t),\mathbf{U}^{*}_{1}(t),\ldots,\mathbf{U}^{*}_{\mathcal{M}}(t)],

    and \mathbf{C} = (c_{ij})_{0\leq i,j\leq\mathcal{M}} is the unknown matrix to be determined, whose order is (\mathcal{M}+1)\times(\mathcal{M}+1).

    The residual R(x,t) of Eq (3.1) may be calculated to give

    \begin{equation} \mathbf{\mathcal{R}}(x,t) = \hat{v}_{t}(x,t)-D_{t}^{\alpha}\,[\,a\,\hat{v}_{xx}(x,t)\,]-b\,\hat{v}_{xx}(x,t)-\mathcal{S}(x,t). \end{equation} (3.17)

    The philosophy in applying the Galerkin method is to find \hat{v}(x,t)\in\mathbf{X}_{\mathcal{M}}(\Omega), such that

    \begin{equation} \left(\mathbf{\mathcal{R}}(x,t),\varphi_{r}(x)\,\mathbf{U}^{*}_{s}(t)\right)_{\bar{\omega}(x,t)} = 0,\quad 0\leq r\leq\mathcal{M},\ 0\leq s\leq\mathcal{M}-1, \end{equation} (3.18)

    where \bar{\omega}(x,t) = \hat{\omega}(x)\,\omega(t). The last equation may be rewritten as

    \begin{equation} \begin{split} &\sum\limits_{i=0}^{\mathcal{M}}\sum\limits_{j=0}^{\mathcal{M}}c_{ij}\left(\varphi_{i}(x),\varphi_{r}(x)\right)_{\hat{\omega}(x)}\left(\frac{d\mathbf{U}^{*}_{j}(t)}{dt},\mathbf{U}^{*}_{s}(t)\right)_{\omega(t)}-a\sum\limits_{i=0}^{\mathcal{M}}\sum\limits_{j=0}^{\mathcal{M}}c_{ij}\left(\frac{d^{2}\varphi_{i}(x)}{dx^{2}},\varphi_{r}(x)\right)_{\hat{\omega}(x)}\left(D_{t}^{\alpha}\mathbf{U}^{*}_{j}(t),\mathbf{U}^{*}_{s}(t)\right)_{\omega(t)}\\ &-b\sum\limits_{i=0}^{\mathcal{M}}\sum\limits_{j=0}^{\mathcal{M}}c_{ij}\left(\frac{d^{2}\varphi_{i}(x)}{dx^{2}},\varphi_{r}(x)\right)_{\hat{\omega}(x)}\left(\mathbf{U}^{*}_{j}(t),\mathbf{U}^{*}_{s}(t)\right)_{\omega(t)} = \left(\mathcal{S}(x,t),\varphi_{r}(x)\,\mathbf{U}^{*}_{s}(t)\right)_{\bar{\omega}(x,t)}. \end{split} \end{equation} (3.19)

    In matrix form, Eq (3.19) can be written as

    \begin{equation} \mathbf{A}^{T}\,\mathbf{C}\,\mathbf{B}-a\,\mathbf{H}^{T}\,\mathbf{C}\,\mathbf{K}-b\,\mathbf{H}^{T}\,\mathbf{C}\,\mathbf{Q} = \mathbf{G}, \end{equation} (3.20)

    where

    \begin{equation} \mathbf{G} = (g_{r,s})_{(\mathcal{M}+1)\times\mathcal{M}},\quad g_{r,s} = \left(\mathcal{S}(x,t),\varphi_{r}(x)\,\mathbf{U}^{*}_{s}(t)\right)_{\bar{\omega}(x,t)}, \end{equation} (3.21)
    \begin{equation} \mathbf{A} = (a_{i,r})_{(\mathcal{M}+1)\times(\mathcal{M}+1)},\quad a_{i,r} = \left(\varphi_{i}(x),\varphi_{r}(x)\right)_{\hat{\omega}(x)}, \end{equation} (3.22)
    \begin{equation} \mathbf{B} = (b_{j,s})_{(\mathcal{M}+1)\times\mathcal{M}},\quad b_{j,s} = \left(\frac{d\mathbf{U}^{*}_{j}(t)}{dt},\mathbf{U}^{*}_{s}(t)\right)_{\omega(t)}, \end{equation} (3.23)
    \begin{equation} \mathbf{H} = (h_{i,r})_{(\mathcal{M}+1)\times(\mathcal{M}+1)},\quad h_{i,r} = \left(\frac{d^{2}\varphi_{i}(x)}{dx^{2}},\varphi_{r}(x)\right)_{\hat{\omega}(x)}, \end{equation} (3.24)
    \begin{equation} \mathbf{K} = (k_{j,s})_{(\mathcal{M}+1)\times\mathcal{M}},\quad k_{j,s} = \left(D_{t}^{\alpha}\mathbf{U}^{*}_{j}(t),\mathbf{U}^{*}_{s}(t)\right)_{\omega(t)}, \end{equation} (3.25)
    \begin{equation} \mathbf{Q} = (q_{j,s})_{(\mathcal{M}+1)\times\mathcal{M}},\quad q_{j,s} = \left(\mathbf{U}^{*}_{j}(t),\mathbf{U}^{*}_{s}(t)\right)_{\omega(t)}. \end{equation} (3.26)

    Moreover, (3.2) implies that

    \begin{equation} \sum\limits_{i=0}^{\mathcal{M}}\sum\limits_{j=0}^{\mathcal{M}}c_{ij}\,a_{i,r}\,\mathbf{U}^{*}_{j}(0) = \left(v(x,0),\varphi_{r}(x)\right)_{\hat{\omega}(x)},\quad 0\leq r\leq\mathcal{M}. \end{equation} (3.27)

    Now, Eq (3.20) along with (3.27) constitutes a system of algebraic equations of order (\mathcal{M}+1)^{2}, which may be solved using a suitable numerical procedure.
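
    The structure of the system (3.20) together with (3.27) can be illustrated as follows. The sketch below (ours, not the authors' implementation) flattens the matrix equation with Kronecker products and solves for \mathbf{C} with a dense linear solver; the entry matrices are filled with random placeholders here, whereas in practice they would be computed from Theorem 3.2 and Eq (3.27):

```python
# Structural sketch of (3.20) + (3.27): vec(A^T C B) = (B^T kron A^T) vec(C), column-major vec.
import numpy as np

M, a, b = 4, 1.0, 1.0
rng = np.random.default_rng(0)
A, H = rng.standard_normal((2, M + 1, M + 1))      # placeholders for (phi_i, phi_r), (phi_i'', phi_r)
B, K, Q = rng.standard_normal((3, M + 1, M))       # placeholders for the time-direction inner products
G = rng.standard_normal((M + 1, M))                # placeholder right-hand side of (3.20)
u0 = rng.standard_normal(M + 1)                    # placeholder for U*_j(0), j = 0..M
g0 = rng.standard_normal(M + 1)                    # placeholder for (v(x,0), phi_r)_{omega-hat}

top = np.kron(B.T, A.T) - a * np.kron(K.T, H.T) - b * np.kron(Q.T, H.T)
bottom = np.kron(u0[None, :], A.T)                 # rows enforcing the initial condition (3.27)
lhs = np.vstack([top, bottom])                     # ((M+1)^2) x ((M+1)^2) system
rhs = np.concatenate([G.flatten(order="F"), g0])

C = np.linalg.solve(lhs, rhs).reshape(M + 1, M + 1, order="F")
print(C.shape)                                     # (M+1, M+1) coefficient matrix
```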

    Now, the derivation of the formulas for the entries of the matrices \mathbf{A}, \mathbf{B}, \mathbf{H}, \mathbf{K} and \mathbf{Q} is given in the following theorem.

    Theorem 3.2. The following definite integral formulas are valid:

    \begin{equation} \begin{split} &(\mathrm{a})\ \int_{0}^{\ell}\hat{\omega}(x)\,\varphi_{i}(x)\,\varphi_{r}(x)\,dx = a_{i,r},\qquad (\mathrm{b})\ \int_{0}^{\ell}\hat{\omega}(x)\,\frac{d^{2}\varphi_{i}(x)}{dx^{2}}\,\varphi_{r}(x)\,dx = h_{i,r},\\ &(\mathrm{c})\ \int_{0}^{\tau}\omega(t)\,\mathbf{U}^{*}_{j}(t)\,\mathbf{U}^{*}_{s}(t)\,dt = q_{j,s},\qquad (\mathrm{d})\ \int_{0}^{\tau}\omega(t)\,\frac{d\mathbf{U}^{*}_{j}(t)}{dt}\,\mathbf{U}^{*}_{s}(t)\,dt = b_{j,s},\\ &(\mathrm{e})\ \int_{0}^{\tau}\omega(t)\,\left[D_{t}^{\alpha}\mathbf{U}^{*}_{j}(t)\right]\mathbf{U}^{*}_{s}(t)\,dt = k_{j,s}, \end{split} \end{equation} (3.28)

    where q_{j,s} and a_{i,r} are given respectively in Eqs (2.6) and (3.6). Also, we have

    \begin{equation} h_{i,r} = \sum\limits_{j=0}^{i}\mu_{j,i}\,\gamma_{j,r}, \end{equation} (3.29)
    \begin{equation} b_{j,s} = \frac{\pi\,\tau}{2}\sum\limits_{\substack{p=0\\ (p+j)\,\mathrm{odd}}}^{j-1}(p+1)\,\delta_{p,s}, \end{equation} (3.30)
    \begin{equation} \gamma_{j,r} = \begin{cases} \pi\,(r+1), & \text{if}\ j\geq r\ \text{and}\ (r+j)\ \text{even},\\ \pi\,(j+1), & \text{if}\ j<r\ \text{and}\ (r+j)\ \text{even},\\ 0, & \text{otherwise}, \end{cases} \end{equation} (3.31)
    \begin{equation} \delta_{p,s} = \begin{cases} 1, & \text{if}\ p=s,\\ 0, & \text{if}\ p\neq s, \end{cases} \end{equation} (3.32)

    and

    \begin{equation} k_{j,s} = \sum\limits_{k=1}^{j}\frac{\pi\,4^{k-1}\,(s+1)\,k!\,\tau^{2-\alpha}\,(-1)^{j+k+s}\,(j+k+1)!\,\Gamma\left(k-\alpha+\frac{3}{2}\right)}{(2k+1)!\,(j-k)!\,\Gamma(k-\alpha+1)}\ {}_{3}\tilde{F}_{2}\left(\left.\begin{matrix} -s,\ s+2,\ -\alpha+k+\frac{3}{2}\\ \frac{3}{2},\ -\alpha+k+3 \end{matrix}\,\right|\,1\right), \end{equation} (3.33)

    where {}_{3}\tilde{F}_{2} is the regularized hypergeometric function [59].

    Proof. To find the elements h_{i,r}: Using Theorem 3.1, one has

    \begin{equation} h_{i,r} = \int_{0}^{\ell}\hat{\omega}(x)\,\frac{d^{2}\varphi_{i}(x)}{dx^{2}}\,\varphi_{r}(x)\,dx = \sum\limits_{j=0}^{i}\mu_{j,i}\int_{0}^{\ell}\hat{\omega}(x)\,\mathbf{U}^{*}_{j}(x)\,\varphi_{r}(x)\,dx. \end{equation} (3.34)

    Now, \int_{0}^{\ell}\hat{\omega}(x)\,\mathbf{U}^{*}_{j}(x)\,\varphi_{r}(x)\,dx can be calculated to give the following result:

    \begin{equation} \int_{0}^{\ell}\hat{\omega}(x)\,\mathbf{U}^{*}_{j}(x)\,\varphi_{r}(x)\,dx = \gamma_{j,r}, \end{equation} (3.35)

    and therefore, we get the following desired result:

    \begin{equation} h_{i,r} = \sum\limits_{j=0}^{i}\mu_{j,i}\,\gamma_{j,r}. \end{equation} (3.36)

    To find the elements b_{j,s}: Formula (2.10) along with the orthogonality relation (2.6) helps us to write

    \begin{equation} b_{j,s} = \int_{0}^{\tau}\omega(t)\,\frac{d\mathbf{U}^{*}_{j}(t)}{dt}\,\mathbf{U}^{*}_{s}(t)\,dt = \frac{\pi\,\tau}{2}\sum\limits_{\substack{p=0\\ (p+j)\,\mathrm{odd}}}^{j-1}(p+1)\,\delta_{p,s}. \end{equation} (3.37)

    To find k_{j,s}: Using property (2.3) together with (2.4), one can write

    \begin{equation} \begin{split} k_{j,s} = \int_{0}^{\tau}\omega(t)\,\left[D_{t}^{\alpha}\mathbf{U}^{*}_{j}(t)\right]\mathbf{U}^{*}_{s}(t)\,dt & = \sum\limits_{k=1}^{j}\frac{2^{2k}\,k!\,(-1)^{j+k}\,(j+k+1)!}{(2k+1)!\,\tau^{k}\,(j-k)!\,\Gamma(k-\alpha+1)}\int_{0}^{\tau}\mathbf{U}^{*}_{s}(t)\,t^{k-\alpha}\,\omega(t)\,dt\\ & = \sum\limits_{k=1}^{j}\frac{2^{2k}\,k!\,(-1)^{j+k}\,(j+k+1)!}{(2k+1)!\,(j-k)!\,\Gamma(k-\alpha+1)}\sum\limits_{n=0}^{s}\frac{\sqrt{\pi}\,2^{2n-1}\,\tau^{2-\alpha}\,(-1)^{n+s}\,\Gamma(n+s+2)\,\Gamma\left(k+n-\alpha+\frac{3}{2}\right)}{(2n+1)!\,(s-n)!\,\Gamma(k+n-\alpha+3)}. \end{split} \end{equation} (3.38)

    If we note the following identity:

    \begin{equation} \sum\limits_{n=0}^{s}\frac{\sqrt{\pi}\,2^{2n-1}\,\tau^{2-\alpha}\,(-1)^{n+s}\,(n+s+1)!\,\Gamma\left(k+n-\alpha+\frac{3}{2}\right)}{(2n+1)!\,(s-n)!\,\Gamma(k+n-\alpha+3)} = \frac{1}{4}\,\pi\,(-1)^{s}\,(s+1)\,\tau^{2-\alpha}\,\Gamma\left(k-\alpha+\frac{3}{2}\right)\ {}_{3}\tilde{F}_{2}\left(\left.\begin{matrix} -s,\ s+2,\ -\alpha+k+\frac{3}{2}\\ \frac{3}{2},\ -\alpha+k+3 \end{matrix}\,\right|\,1\right), \end{equation} (3.39)

    then, we get

    \begin{equation} k_{j,s} = \sum\limits_{k=1}^{j}\frac{\pi\,4^{k-1}\,(s+1)\,k!\,\tau^{2-\alpha}\,(-1)^{j+k+s}\,\Gamma(j+k+2)\,\Gamma\left(k-\alpha+\frac{3}{2}\right)}{\Gamma(2k+2)\,(j-k)!\,\Gamma(k-\alpha+1)}\ {}_{3}\tilde{F}_{2}\left(\left.\begin{matrix} -s,\ s+2,\ -\alpha+k+\frac{3}{2}\\ \frac{3}{2},\ -\alpha+k+3 \end{matrix}\,\right|\,1\right). \end{equation} (3.40)

    Theorem 3.2 is now proved.
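
    The closed form (3.33) can be cross-checked against direct quadrature of (3.28e). The sketch below (ours; it assumes \tau = 1 and evaluates the regularized {}_{3}\tilde{F}_{2} as mpmath's hyp3f2 divided by \Gamma(3/2)\,\Gamma(k-\alpha+3)) performs this comparison:

```python
# Sketch: compare the closed form (3.33) for k_{j,s} with quadrature of (3.28e).
from mpmath import mp, mpf, gamma, sqrt, pi, quad, hyp3f2, factorial

mp.dps = 30
tau, alpha, j, s = mpf(1), mpf("0.5"), 4, 2

lam = lambda r, n: 4**r * (-1)**(n + r) * factorial(n + r + 1) / (
    tau**r * factorial(2 * r + 1) * factorial(n - r))                     # Eq (2.5)
U = lambda n, t: sum(lam(r, n) * t**r for r in range(n + 1))              # Eq (2.4)
DalphaU = lambda n, t: sum(lam(r, n) * factorial(r) / gamma(r - alpha + 1)
                           * t**(r - alpha) for r in range(1, n + 1))     # via Eq (2.3)

direct = quad(lambda t: sqrt(t * (tau - t)) * DalphaU(j, t) * U(s, t), [0, tau])

closed = sum(pi * 4**(k - 1) * (s + 1) * factorial(k) * tau**(2 - alpha)
             * (-1)**(j + k + s) * factorial(j + k + 1) * gamma(k - alpha + mpf(3)/2)
             / (factorial(2 * k + 1) * factorial(j - k) * gamma(k - alpha + 1))
             * hyp3f2(-s, s + 2, -alpha + k + mpf(3)/2, mpf(3)/2, -alpha + k + 3, 1)
             / (gamma(mpf(3)/2) * gamma(-alpha + k + 3))
             for k in range(1, j + 1))

print(direct, closed)   # the two values should agree to many digits
```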

    Remark 3.2. The following algorithm shows our proposed Galerkin technique, which outlines the necessary steps to get the approximate solutions.

    Algorithm 1 Coding algorithm for the proposed technique
    Input a, b, \ell, \tau, \alpha, v_{0}(x), and \mathcal{S}(x,t).
    Step 1. Assume an approximate solution \hat{v}(x,t) as in (3.16).
    Step 2. Apply the Galerkin method to obtain the system in (3.20) and (3.27).
    Step 3. Use Theorem 3.2 to get the elements of the matrices \mathbf{A}, \mathbf{B}, \mathbf{H}, \mathbf{K} and \mathbf{Q}.
    Step 4. Use the NDSolve command to solve the system in (3.20) and (3.27) to get c_{ij}.
    Output \hat{v}(x,t).

    Remark 3.3. Based on the following substitution:

    \begin{equation} v(x,t) := y(x,t)+\left(1-\frac{x}{\ell}\right)v(0,t)+\frac{x}{\ell}\,v(\ell,t), \end{equation} (3.41)

    the FRSE (3.1) with non-homogeneous boundary conditions will convert to one with homogeneous conditions y(0,t) = y(\ell,t) = 0.
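
    A one-line symbolic check (ours; sympy assumed available) confirms that the substitution (3.41) indeed yields homogeneous boundary values:

```python
# Sketch: the transformed unknown y vanishes at x = 0 and x = ell.
import sympy as sp

x, t, ell = sp.symbols("x t ell", positive=True)
v = sp.Function("v")
y = v(x, t) - (1 - x/ell) * v(0, t) - (x/ell) * v(ell, t)
print(sp.simplify(y.subs(x, 0)), sp.simplify(y.subs(x, ell)))   # 0 0
```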

    In this section, we study the error bound for the two cases corresponding to the 1-D and 2-D Chebyshev-weighted Sobolev spaces.

    Assume the following Chebyshev-weighted Sobolev spaces:

    \begin{equation} \mathbf{H}^{\alpha,m}_{\omega(t)}(I_{1}) = \left\{u:\ D_{t}^{\alpha+k}u\in L^{2}_{\omega(t)}(I_{1}),\ 0\leq k\leq m\right\}, \end{equation} (4.1)
    \begin{equation} \mathbf{Y}^{m}_{\hat{\omega}(x)}(I_{2}) = \left\{u:\ u(0) = u(\ell) = 0\ \text{and}\ D_{x}^{k}u\in L^{2}_{\hat{\omega}(x)}(I_{2}),\ 0\leq k\leq m\right\}, \end{equation} (4.2)

    where I_{1} = (0,\tau) and I_{2} = (0,\ell) are equipped with the inner products, norms, and semi-norms

    \begin{equation} \begin{split} &(u,v)_{\mathbf{H}^{\alpha,m}_{\omega(t)}} = \sum\limits_{k=0}^{m}\left(D_{t}^{\alpha+k}u,D_{t}^{\alpha+k}v\right)_{L^{2}_{\omega(t)}},\quad ||u||^{2}_{\mathbf{H}^{\alpha,m}_{\omega(t)}} = (u,u)_{\mathbf{H}^{\alpha,m}_{\omega(t)}},\quad |u|_{\mathbf{H}^{\alpha,m}_{\omega(t)}} = ||D_{t}^{\alpha+m}u||_{L^{2}_{\omega(t)}},\\ &(u,v)_{\mathbf{Y}^{m}_{\hat{\omega}(x)}} = \sum\limits_{k=0}^{m}\left(D_{x}^{k}u,D_{x}^{k}v\right)_{L^{2}_{\hat{\omega}(x)}},\quad ||u||^{2}_{\mathbf{Y}^{m}_{\hat{\omega}(x)}} = (u,u)_{\mathbf{Y}^{m}_{\hat{\omega}(x)}},\quad |u|_{\mathbf{Y}^{m}_{\hat{\omega}(x)}} = ||D_{x}^{m}u||_{L^{2}_{\hat{\omega}(x)}}, \end{split} \end{equation} (4.3)

    where 0<\alpha<1 and m\in\mathbb{N}.

    Also, assume the following two-dimensional Chebyshev-weighted Sobolev space:

    \begin{equation} \mathbf{H}^{r,s}_{\bar{\omega}(x,t)}(\Omega) = \left\{u:\ u(0,t) = u(\ell,t) = 0\ \text{and}\ \frac{\partial^{\alpha+p+q}u}{\partial x^{p}\,\partial t^{\alpha+q}}\in L^{2}_{\bar{\omega}(x,t)}(\Omega),\ r\geq p\geq 0,\ s\geq q\geq 0\right\}, \end{equation} (4.4)

    equipped with the norm and semi-norm

    \begin{equation} ||u||_{\mathbf{H}^{r,s}_{\bar{\omega}(x,t)}} = \left(\sum\limits_{p=0}^{r}\sum\limits_{q=0}^{s}\left|\left|\frac{\partial^{\alpha+p+q}u}{\partial x^{p}\,\partial t^{\alpha+q}}\right|\right|^{2}_{L^{2}_{\bar{\omega}(x,t)}}\right)^{\frac{1}{2}},\quad |u|_{\mathbf{H}^{r,s}_{\bar{\omega}(x,t)}} = \left|\left|\frac{\partial^{\alpha+r+s}u}{\partial x^{r}\,\partial t^{\alpha+s}}\right|\right|_{L^{2}_{\bar{\omega}(x,t)}}, \end{equation} (4.5)

    where 0<\alpha<1 and r,s\in\mathbb{N}.

    Lemma 4.1. [60] For n\in\mathbb{N}, n+r>1, and n+s>1, where r,s\in\mathbb{R} are any constants, we have

    \begin{equation} \frac{\Gamma(n+r)}{\Gamma(n+s)}\leq o^{r,s}_{n}\,n^{r-s}, \end{equation} (4.6)

    where

    \begin{equation} o^{r,s}_{n} = \exp\left(\frac{r-s}{2\,(n+s-1)}+\frac{1}{12\,(n+r-1)}+\frac{(r-s)^{2}}{n}\right). \end{equation} (4.7)
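
    A quick numerical spot-check (ours) of the bound (4.6)-(4.7):

```python
# Sketch: Gamma(n+r)/Gamma(n+s) should not exceed o_n^{r,s} * n^{r-s}.
from math import gamma, exp

def o(n, r, s):
    return exp((r - s) / (2 * (n + s - 1)) + 1 / (12 * (n + r - 1)) + (r - s) ** 2 / n)

n, r, s = 20, 1.5, 0.5
print(gamma(n + r) / gamma(n + s), o(n, r, s) * n ** (r - s))   # 20.5 <= ~21.7
```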

    Theorem 4.1. Suppose 0<\alpha<1, and \hat{\eta}(t) = \sum\limits_{j=0}^{\mathcal{M}}\hat{\eta}_{j}\,\mathbf{U}^{*}_{j}(t) is the approximate solution of \eta(t)\in\mathbf{H}^{\alpha,m}_{\omega(t)}(I_{1}). Then, for 0\leq k\leq m\leq\mathcal{M}+1, we get

    \begin{equation} ||D_{t}^{\alpha+k}\,(\eta(t)-\hat{\eta}(t))||_{L^{2}_{\omega(t)}}\lesssim \tau^{m-k}\,\mathcal{M}^{\frac{-5}{4}\,(m-k)}\,|\eta(t)|^{2}_{\mathbf{H}^{\alpha,m}_{\omega(t)}}, \end{equation} (4.8)

    where A\lesssim B indicates the existence of a constant \nu such that A\leq\nu\,B.

    Proof. The definitions of \eta(t) and \hat{\eta}(t) allow us to have

    \begin{equation} \begin{split} ||D_{t}^{\alpha+k}\,(\eta(t)-\hat{\eta}(t))||^{2}_{L^{2}_{\omega(t)}}& = \sum\limits_{n = \mathcal{M}+1}^{\infty}|\hat{\eta}_{n}|^{2}\,||D_{t}^{\alpha+k}\,\mathbf{U}^{*}_{n}(t)||^{2}_{L^{2}_{\omega(t)}}\\& = \sum\limits_{n = \mathcal{M}+1}^{\infty}|\hat{\eta}_{n}|^{2}\,\frac{||D_{t}^{\alpha+k}\,\mathbf{U}^{*}_{n}(t)||^{2}_{L^{2}_{\omega(t)}}}{||D_{t}^{\alpha+m}\,\mathbf{U}^{*}_{n}(t)||^{2}_{L^{2}_{\omega(t)}}}\,||D_{t}^{\alpha+m}\,\mathbf{U}^{*}_{n}(t)||^{2}_{L^{2}_{\omega(t)}}\\& \leq \frac{||D_{t}^{\alpha+k}\,\mathbf{U}^{*}_{\mathcal{M}+1}(t)||^{2}_{L^{2}_{\omega(t)}}}{||D_{t}^{\alpha+m}\,\mathbf{U}^{*}_{\mathcal{M}+1}(t)||^{2}_{L^{2}_{\omega(t)}}}\,|\eta(t)|^{2}_{\mathbf{H}^{\alpha,m}_{\omega(t)}}. \end{split} \end{equation} (4.9)

    To estimate the factor \frac{||D_{t}^{\alpha+k}\,\mathbf{U}^{*}_{\mathcal{M}+1}(t)||^{2}_{L^{2}_{\omega(t)}}}{||D_{t}^{\alpha+m}\,\mathbf{U}^{*}_{\mathcal{M}+1}(t)||^{2}_{L^{2}_{\omega(t)}}}, we first find ||D_{t}^{\alpha+k}\,\mathbf{U}^{*}_{\mathcal{M}+1}(t)||^{2}_{L^{2}_{\omega(t)}}.

    \begin{equation} ||D_{t}^{\alpha+k}\,\mathbf{U}^{*}_{\mathcal{M}+1}(t)||^{2}_{L^{2}_{\omega(t)}} = \int_{0}^{\tau}D_{t}^{\alpha+k}\,\mathbf{U}^{*}_{\mathcal{M}+1}(t)\,D_{t}^{\alpha+k}\,\mathbf{U}^{*}_{\mathcal{M}+1}(t)\,\omega(t)\,dt. \end{equation} (4.10)

    Equation (2.3) along with (2.4) allows us to write

    \begin{equation} D_{t}^{\alpha+k}\,\mathbf{U}^{*}_{\mathcal{M}+1}(t) = \sum\limits_{r = k+1}^{\mathcal{M}+1}\lambda_{r,\mathcal{M}+1}\,\frac{r!}{\Gamma(r-k-\alpha+1)}\,t^{r-k-\alpha}, \end{equation} (4.11)

    and accordingly, we have

    \begin{equation} \begin{split} ||D_{t}^{\alpha+k}\,\mathbf{U}^{*}_{\mathcal{M}+1}(t)||^{2}_{L^{2}_{\omega(t)}}& = \sum\limits_{r = k+1}^{\mathcal{M}+1}\frac{\lambda^{2}_{r,\mathcal{M}+1}\,(r!)^{2}}{\Gamma^{2}(r-k-\alpha+1)}\int_{0}^{\tau}t^{2\,(r-k-\alpha)+\frac{1}{2}}\,(\tau-t)^{\frac{1}{2}}\,dt\\& = \sum\limits_{r = k+1}^{\mathcal{M}+1}\frac{\lambda^{2}_{r,\mathcal{M}+1}\,\tau^{2\,(r-k-\alpha+1)}\,\sqrt{\pi}\,(r!)^{2}\,\Gamma\left(2\,(r-k-\alpha)+\frac{3}{2}\right)}{2\,\Gamma^{2}(r-k-\alpha+1)\,\Gamma(2\,(r-k-\alpha)+3)}. \end{split} \end{equation} (4.12)

    The following inequality can be obtained after applying the Stirling formula [44]:

    \begin{equation} \frac{\Gamma^{2}(r+1)\,\Gamma\left(2\,(r-k-\alpha)+\frac{3}{2}\right)}{\Gamma^{2}(r-k-\alpha+1)\,\Gamma(2\,(r-k-\alpha)+3)}\lesssim r^{2\,(k+\alpha)}\,(r-k)^{-\frac{3}{2}}. \end{equation} (4.13)

    By virtue of the Stirling formula [44] and Lemma 4.1, ||D_{t}^{\alpha+k}\,\mathbf{U}^{*}_{\mathcal{M}+1}(t)||^{2}_{L^{2}_{\omega(t)}} can be written as

    \begin{equation} \begin{split} ||D_{t}^{\alpha+k}\,\mathbf{U}^{*}_{\mathcal{M}+1}(t)||^{2}_{L^{2}_{\omega(t)}}&\lesssim \lambda\,\tau^{2\,(\mathcal{M}-k-\alpha+2)}\,(\mathcal{M}+1)^{2\,(k+\alpha)}\,(\mathcal{M}-k+1)^{-\frac{3}{2}}\sum\limits_{r = k+1}^{\mathcal{M}+1}1\\& = \lambda\,\tau^{2\,(\mathcal{M}-k-\alpha+2)}\,(\mathcal{M}+1)^{2\,(k+\alpha)}\,(\mathcal{M}-k+1)^{-\frac{1}{2}}\\& = \lambda\,\tau^{2\,(\mathcal{M}-k-\alpha+2)}\,\left(\frac{\Gamma(\mathcal{M}+2)}{\Gamma(\mathcal{M}+1)}\right)^{2\,(k+\alpha)}\,\left(\frac{\Gamma(\mathcal{M}-k+2)}{\Gamma(\mathcal{M}-k+1)}\right)^{-\frac{1}{2}}\\&\lesssim \tau^{2\,(\mathcal{M}-k-\alpha+2)}\,\mathcal{M}^{2\,(k+\alpha)}\,(\mathcal{M}-k)^{-\frac{1}{2}}, \end{split} \end{equation} (4.14)

    where \lambda = \max\limits_{k+1\leq r\leq\mathcal{M}+1}\left\{\lambda^{2}_{r,\mathcal{M}+1}\right\}.

    Similarly, we have

    \begin{equation} ||D_{t}^{\alpha+m}\,\mathbf{U}^{*}_{\mathcal{M}+1}(t)||^{2}_{L^{2}_{\omega(t)}}\lesssim \tau^{2\,(\mathcal{M}-m-\alpha+2)}\, \mathcal{M}^{2\,(m+\alpha)}\,(\mathcal{M}-m)^{-\frac{1}{2}}, \end{equation} (4.15)

    and accordingly, we have

    \begin{equation} \begin{split} \frac{||D_{t}^{\alpha+k}\,\mathbf{U}^{*}_{\mathcal{M}+1}(t)||^{2}_{L^{2}_{\omega(t)}}}{||D_{t}^{\alpha+m}\,\mathbf{U}^{*}_{\mathcal{M}+1}(t)||_{L^{2}_{\omega(t)}}} &\lesssim \tau^{2\,(m-k)}\,\mathcal{M}^{2\,(k-m)}\,\left(\frac{\mathcal{M}-k}{\mathcal{M}-m}\right)^{-\frac{1}{2}}\\& = \tau^{2\,(m-k)}\,\mathcal{M}^{-2\,(m-k)}\,\left(\frac{\Gamma(\mathcal{M}-k+1)}{\Gamma(\mathcal{M}-m+1)}\right)^{-\frac{1}{2}}\\& \lesssim \tau^{2\,(m-k)}\,\mathcal{M}^{\frac{-5}{2}(m-k)}. \end{split} \end{equation} (4.16)

    Inserting Eq (4.16) into Eq (4.9), one gets

    \begin{equation} ||D_{t}^{\alpha+k}\,(\eta(t)-\hat{\eta}(t))||^{2}_{L^{2}_\omega(t)}\lesssim \tau^{2\,(m-k)}\,\mathcal{M}^{\frac{-5}{2}(m-k)}\,|\eta(t)|^{2}_{\mathbf{H}^{\alpha,m}_{\omega(t)}}. \end{equation} (4.17)

    Therefore, we get the desired result.

    Theorem 4.2. Suppose \bar{\zeta}(x) = \sum_{i = 0}^{\mathcal{M}} \hat{\zeta}_{i}\, \varphi_i(x) is the approximate solution of \zeta(x)\in\mathbf{Y}^{m}_{\hat{\omega}(x)}(I_{2}). Then, for 0\leq k\leq m\leq \mathcal{M}+1, we get

    \begin{equation} ||D_{x}^{k}\,(\zeta(x)-\bar{\zeta}(x))||_{L^{2}_{\hat{\omega}(x)}}\lesssim \ell^{m-k}\,\mathcal{M}^{\frac{-1}{4}\,(m-k)}\,|\zeta(x)|^{2}_{\mathbf{Y}^{m}_{\hat{\omega}(x)}}. \end{equation} (4.18)

    Proof. At first, based on the definitions of \zeta(x) and \bar{\zeta}(x), one has

    \begin{equation} \begin{split} ||D_{x}^{k}\,(\zeta(x)-\bar{\zeta}(x)||^{2}_{L^{2}_{\hat{\omega}(x)}}& = \sum\limits_{n = \mathcal{M}+1}^{\infty} |\hat{\zeta}_{n}|^{2}\,||D_{x}^{k}\,\varphi_n(x)||^{2}_{L^{2}_{\hat{\omega}(x)}}\\& = \sum\limits_{n = \mathcal{M}+1}^{\infty} |\hat{\zeta}_{n}|^{2}\,\frac{||D_{x}^{k}\,\varphi_{n}(x)||^{2}_{L^{2}_{\hat{\omega}(x)}}}{||D_{x}^{m}\,\varphi_{n}(x)||^{2}_{L^{2}_{\hat{\omega}(x)}}}\,||D_{x}^{m}\,\varphi_{n}(x)||^{2}_{L^{2}_{\hat{\omega}(x)}}\\& \leq \frac{||D_{x}^{k}\,\varphi_{\mathcal{M}+1}(x)||^{2}_{L^{2}_{\hat{\omega}(x)}}}{||D_{x}^{m}\,\varphi_{\mathcal{M}+1}(x)||^{2}_{L^{2}_{\hat{\omega}(x)}}}\,|\zeta(x)|^{2}_{\mathbf{Y}^{m}_{\hat{\omega}(x)}}. \end{split} \end{equation} (4.19)

    Now, we have

    \begin{equation} \begin{split} D_{x}^{k}\,\varphi_{\mathcal{M}+1}(x) = \sum _{r = k}^{\mathcal{M}+1} \ell\,\lambda_{r,\mathcal{M}+1} \frac{\Gamma(r+2)}{\Gamma(r-k+2)}\,x^{r-k+1}-\sum _{r = k}^{\mathcal{M}+1} \lambda_{r,\mathcal{M}+1} \frac{\Gamma(r+3)}{\Gamma(r-k+3)}\,x^{r-k+2}, \end{split} \end{equation} (4.20)

    and therefore, ||D_{x}^{k}\, \varphi_{\mathcal{M}+1}(x)||^{2}_{L^{2}_{\hat{\omega}(x)}} can be written as

    \begin{equation} \begin{split} ||D_{x}^{k}\,\varphi_{\mathcal{M}+1}(x)||^{2}_{L^{2}_{\hat{\omega}(x)}} = & -\sum _{r = k}^{\mathcal{M}+1} \ell^{2(r-k)}\,\lambda^{2}_{r,\mathcal{M}+1} \frac{2\,\sqrt{\pi}\,\Gamma^{2}(r+2)\,\Gamma(2(r-k+1)-\frac{1}{2})}{\Gamma^{2}(r-k+2)\,\Gamma(2(r-k+1)-1)}\\& +\sum _{r = k}^{\mathcal{M}+1} \ell^{2(r-k+1)}\,\lambda^{2}_{r,\mathcal{M}+1} \frac{2\,\sqrt{\pi}\,\Gamma^{2}(r+3)\,\Gamma(2(r-k+2)-\frac{1}{2})}{\Gamma^{2}(r-k+3)\,\Gamma(2(r-k+2)-1)}. \end{split} \end{equation} (4.21)

    The application of the Stirling formula [44] leads to

    \begin{equation} \begin{split} &\frac{\Gamma^{2}(r+2)\,\Gamma(2(r-k+1)-\frac{1}{2})}{\Gamma^{2}(r-k+2)\,\Gamma(2(r-k+1)-1)} \lesssim r^{2\,k}\,(r-k)^{\frac{1}{2}},\\& \frac{\Gamma^{2}(r+3)\,\Gamma(2(r-k+2)-\frac{1}{2})}{\Gamma^{2}(r-k+3)\,\Gamma(2(r-k+2)-1)}\lesssim r^{2\,k}\,(r-k)^{\frac{1}{2}}, \end{split} \end{equation} (4.22)

    and hence, we get

    \begin{equation} ||D_{x}^{k}\,\varphi_{\mathcal{M}+1}(x)||^{2}_{L^{2}_{\hat{\omega}(x)}}\lesssim \ell^{2(r-k+1)}\,\mathcal{M}^{2\,k}\,(\mathcal{M}-k)^{\frac{3}{2}}. \end{equation} (4.23)

    Finally, we get the following estimation:

    \begin{equation} \begin{split} \frac{||D_{x}^{k}\,\varphi_{\mathcal{M}+1}(x)||^{2}_{L^{2}_{\hat{\omega}(x)}}}{||D_{x}^{m}\,\varphi_{\mathcal{M}+1}(x)||^{2}_{L^{2}_{\hat{\omega}(x)}}} &\lesssim \ell^{2\,(m-k)}\,\mathcal{M}^{\frac{-1}{2}\,(m-k)}. \end{split} \end{equation} (4.24)

    At the end, we get

    \begin{equation} ||D_{x}^{k}\,(\zeta(x)-\bar{\zeta}(x))||_{L^{2}_{\hat{\omega}(x)}}\lesssim \ell^{m-k}\,\mathcal{M}^{\frac{-1}{4}\,(m-k)}\,|\zeta(x)|^{2}_{\mathbf{Y}^{m}_{\hat{\omega}(x)}}. \end{equation} (4.25)

    Theorem 4.3. Given the following assumptions: \alpha = 0, 0\leq p\leq r\leq \mathcal{M}+1, and the approximation to v(x, t)\in{\mathbf{H}^{r, s}_{\ddot{\omega}}}(\Omega) is \hat{v}(x, t) . As a result, the estimation that follows is applicable:

    \begin{equation} \left|\left|\frac{\partial^{p}}{\partial\,x^{p}}\,(v(x,t)-\hat{v}(x,t))\right|\right|_{L^{2}_{\bar{\omega}(x,t)}}\lesssim \ell^{r-p}\,\mathcal{M}^{\frac{-1}{4}\,(r-p)}\,|v(x,t)|_{\mathbf{H}^{r,0}_{\bar{\omega}(x,t)}}. \end{equation} (4.26)

    Proof. According to the definitions of v(x, t) and \hat{v}(x, t), one has

    \begin{equation} \begin{split} v(x,t)-\hat{v}(x,t)& = \sum\limits_{i = 0}^{\mathcal{M}}\sum\limits_{j = \mathcal{M}+1}^{\infty}c_{ij}\,\varphi_i(x)\,\mathbf{U}^{*}_{j}(t)+\sum\limits_{i = \mathcal{M}+1}^{\infty}\sum\limits_{j = 0}^{\infty}c_{ij}\,\varphi_i(x)\,\mathbf{U}^{*}_{j}(t)\\& \leq\sum\limits_{i = 0}^{\mathcal{M}}\sum\limits_{j = 0}^{\infty}c_{ij}\,\varphi_i(x)\,\mathbf{U}^{*}_{j}(t)+\sum\limits_{i = \mathcal{M}+1}^{\infty}\sum\limits_{j = 0}^{\infty}c_{ij}\,\varphi_i(x)\,\mathbf{U}^{*}_{j}(t). \end{split} \end{equation} (4.27)

    Now, applying the same procedures as in Theorem 4.2, we obtain

    \begin{equation} \left|\left|\frac{\partial^{p}}{\partial\,x^{p}}\,(v(x,t)-\hat{v}(x,t))\right|\right|_{L^{2}_{\bar{\omega}(x,t)}}\lesssim \ell^{r-p}\,\mathcal{M}^{\frac{-1}{4}\,(r-p)}\,|v(x,t)|_{\mathbf{H}^{r,0}_{\bar{\omega}(x,t)}}. \end{equation} (4.28)

    Theorem 4.4. Given the following assumptions: \alpha = 0, 0\leq q\leq s\leq \mathcal{M}+1, and the approximation to v(x, t)\in{\mathbf{H}^{r, s}_{\ddot{\omega}}}(\Omega) is \hat{v}(x, t) . As a result, the estimation that follows is applicable:

    \begin{equation} \left|\left|\frac{\partial^{q}}{\partial\,t^{q}}\,(v(x,t)-\hat{v}(x,t))\right|\right|_{L^{2}_{\bar{\omega}(x,t)}}\lesssim \tau^{s-q}\,\mathcal{M}^{\frac{-5}{4}\,(s-q)}\,|v(x,t)|_{\mathbf{H}^{0,s}_{\bar{\omega}(x,t)}}. \end{equation} (4.29)

    Theorem 4.5. Let \hat{v}(x, t) be the approximate solution of v(x, t)\in{\mathbf{H}^{r, s}_{\bar{\omega}(x, t)}}(\Omega) , and assume that 0 < \alpha < 1 . Consequently, for 0\leq p\leq r\leq \mathcal{M}+1, and 0\leq q\leq s\leq \mathcal{M}+1, we obtain

    \begin{equation} \left|\left|\frac{\partial^{\alpha+q}}{\partial\,t^{\alpha+q}}\,\left[\frac{\partial^{p}}{\partial\,x^{p}}(v(x,t)-\hat{v}(x,t))\right]\right|\right|_{L^{2}_{\bar{\omega}(x,t)}}\lesssim \tau^{s-q}\,\ell^{r-p}\,\mathcal{M}^{\frac{-1}{4}\,[5\,(s-q)+r-p)]}\,|v(x,t)|_{\mathbf{H}^{r,0}_{\bar{\omega}(x,t)}}. \end{equation} (4.30)

    Proof. The proofs of Theorems 4.4 and 4.5 are similar to the proof of Theorem 4.3.

    Theorem 4.6. Let \mathbf{\mathcal{R}}(x, t) be the residual of Eq (3.1), then \left|\left|\mathbf{\mathcal{R}}(x, t)\right|\right|_{L^{2}_{\bar{\omega}(x, t)}}\rightarrow 0 as \mathcal{M}\rightarrow \infty .

    Proof. \left|\left|\mathbf{\mathcal{R}}(x, t)\right|\right|_{L^{2}_{\bar{\omega}(x, t)}} of Eq (3.17) can be written as

    \begin{equation} \begin{split} \left|\left|\mathbf{\mathcal{R}}(x,t)\right|\right|_{L^{2}_{\bar{\omega}(x,t)}} = &\left|\left|\hat{v}_{t}(x,t)-{D_{t}^{\alpha}}\,[\,a\,\hat{v}_{xx}(x,t)\,]-b\,\hat{v}_{xx}(x,t)-\mathcal{S}(x,t)\right|\right|_{L^{2}_{\bar{\omega}(x,t)}}\\ \leq& \left|\left|\frac{\partial}{\partial\,t}\,(v(x,t)-\hat{v}(x,t))\right|\right|_{L^{2}_{\bar{\omega}(x,t)}} -a\,\left|\left|\frac{\partial^{\alpha}}{\partial\,t^{\alpha}}\,\left[\frac{\partial^{2}}{\partial\,x^{2}}(v(x,t)-\hat{v}(x,t))\right]\right|\right|_{L^{2}_{\bar{\omega}(x,t)}}\\& -b\,\left|\left|\frac{\partial^{2}}{\partial\,x^{2}}\,(v(x,t)-\hat{v}(x,t))\right|\right|_{L^{2}_{\bar{\omega}(x,t)}}. \end{split} \end{equation} (4.31)

    Now, the application of Theorems 4.3–4.5 leads to

    \begin{equation} \begin{split} \left|\left|\mathbf{\mathcal{R}}(x,t)\right|\right|_{L^{2}_{\bar{\omega}(x,t)}}\lesssim& \tau^{s-1}\,\mathcal{M}^{\frac{-5}{4}\,(s-1)}\,|v(x,t)|_{\mathbf{H}^{0,s}_{\bar{\omega}(x,t)}}-a\, \tau^{s}\,\ell^{r-2}\,\mathcal{M}^{\frac{-1}{4}\,[5\,s+r-2)]}\,|v(x,t)|_{\mathbf{H}^{r,0}_{\bar{\omega}(x,t)}}\\&-b\,\ell^{r-2}\,\mathcal{M}^{\frac{-1}{4}\,(r-2)}\,|v(x,t)|_{\mathbf{H}^{r,0}_{\bar{\omega}(x,t)}}. \end{split} \end{equation} (4.32)

    Therefore, it is clear that \left|\left|\mathbf{\mathcal{R}}(x, t)\right|\right|_{L^{2}_{\bar{\omega}(x, t)}}\rightarrow 0 as \mathcal{M}\rightarrow \infty.

    This section will compare our shifted second-kind Galerkin method (SSKGM) with other methods. Four test problems will be presented in this regard.

    Example 5.1. [38] Consider the following equation:

    \begin{equation} v_{t}(x,t)-{D_{t}^{\alpha}}\,[\,v_{xx}(x,t)\,]-v_{xx}(x,t) = \mathcal{S}(x,t), \quad 0 < \alpha < 1, \end{equation} (5.1)

    where

    \begin{equation} \mathcal{S}(x,t) = 2\, t\, x\, (x-\ell ) \left[\left(5\, x^2-5\, x\, \ell +\ell ^2\right) \left(\frac{6 }{\Gamma (3-\alpha )}\,t^{1-\alpha }+3\, t\right)-x^2\, (x-\ell )^2\right], \end{equation} (5.2)

    governed by (3.2) and (3.3). Problem (5.1) has the exact solution: u(x, t) = x^3\, (\ell -x)^3\, t^2.
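
    As a sanity check (ours, not part of the paper), the stated exact solution can be verified symbolically against Eq (5.1); the Caputo derivative of the t^{2} factor is applied through Eq (2.3):

```python
# Sketch: confirm that u = x^3 (ell-x)^3 t^2 satisfies Eq (5.1) with the given S.
import sympy as sp

x, t, ell, alpha = sp.symbols("x t ell alpha", positive=True)
u = x**3 * (ell - x)**3 * t**2

u_xx = sp.diff(u, x, 2)
# Caputo derivative in t of u_xx = g(x)*t^2:  D_t^alpha t^2 = 2/Gamma(3-alpha) * t^(2-alpha)
caputo_uxx = (u_xx / t**2) * 2 / sp.gamma(3 - alpha) * t**(2 - alpha)

S = 2*t*x*(x - ell) * ((5*x**2 - 5*x*ell + ell**2)
                       * (6/sp.gamma(3 - alpha)*t**(1 - alpha) + 3*t)
                       - x**2*(x - ell)**2)

residual = sp.diff(u, t) - caputo_uxx - u_xx - S
print(sp.simplify(sp.expand(residual)))   # 0
```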

    In Table 1, we compare the L_{2} errors of the SSKGM with that obtained in [38] at \ell = \tau = 1 . Table 2 reports the amount of time for which a central processing unit (CPU) was used for obtaining results in Table 1. These tables show the high accuracy of our method. Figure 1 illustrates the absolute errors (AEs) at different values of \alpha at \mathcal{M} = 4 when \ell = \tau = 1 . Figure 2 illustrates the AEs at different values of \alpha at \mathcal{M} = 4 when \ell = 3, and \tau = 2 . Figure 3 shows the AEs at different \alpha at \mathcal{M} = 4 when \ell = 10, and \tau = 5 .

    Table 1.  Comparison of the L_{2} errors for Example 5.1.
    Our method Method in [38]
    \alpha \mathcal{M}=4 h= \frac{1}{5000}, T= \frac{1}{128} T= \frac{1}{5000}, h= \frac{1}{128}
    0.1 1.22946\times10^{-16} 1.1552\times10^{-6} 1.4408\times10^{-6}
    0.5 2.40485\times10^{-16} 1.0805\times10^{-6} 1.4007\times10^{-6}
    0.9 8.83875\times10^{-17} 8.1511\times10^{-7} 1.3682\times10^{-6}

    Table 2.  CPU time used for Table 1.
    CPU time of our method CPU time of method in [38]
    \alpha \mathcal{M}=4 h= \frac{1}{5000}, T= \frac{1}{128} T= \frac{1}{5000}, h= \frac{1}{128}
    0.1 30.891 16.828 67.243
    0.5 35.953 16.733 67.470
    0.9 31.078 16.672 67.006

    Figure 1.  The AEs at different values of \alpha for Example 5.1.
    Figure 2.  The AEs at different values of \alpha for Example 5.1.
    Figure 3.  The AEs at different values of \alpha for Example 5.1.

    Example 5.2. [38] Consider the following equation:

    \begin{equation} v_{t}(x,t)-{D_{t}^{\alpha}}\,[\,v_{xx}(x,t)\,]-v_{xx}(x,t) = \mathcal{S}(x,t) , \quad 0 < \alpha < 1, \end{equation} (5.3)

    where

    \mathcal{S}(x,t) = u^2+\sin (\pi\, x) \left[\frac{2 \,\pi ^2 }{\Gamma (3-\alpha )}\,t^{2-\alpha }+\pi ^2\, t^2+2\, t\right]-t^4 \sin ^2(\pi \, x),

    governed by (3.2) and (3.3). Problem (5.3) has the exact solution: u(x, t) = t^2\, \sin (\pi \, x).

    Table 3 compares the L_{2} errors of the SSKGM with those obtained by the method in [38] at \ell = \tau = 1 . This table shows that our results are more accurate. Table 4 reports the CPU time used for obtaining the results in Table 3. Moreover, Figure 4 sketches the AEs at different values of \mathcal{M} when \alpha = 0.7 , and \ell = \tau = 1 . Table 5 presents the maximum AEs at \alpha = 0.8 and \mathcal{M} = 8 when \ell = \tau = 1 . Figure 5 sketches the AEs at different \alpha for \mathcal{M} = 10, \ell = 3 and \tau = 1 .

    Table 3.  Comparison of the L_{2} errors for Example 5.2.
    Our method Method in [38]
    \alpha \mathcal{M}=8 h= \frac{1}{5000}, T= \frac{1}{128} T= \frac{1}{5000}, h= \frac{1}{128}
    0.1 4.97952\times10^{-10} 9.1909\times10^{-5} 5.1027\times10^{-5}
    0.5 5.85998\times10^{-10} 8.4317\times10^{-5} 4.4651\times10^{-5}
    0.9 4.62473\times10^{-10} 6.2864\times10^{-5} 4.0543\times10^{-5}

    Table 4.  CPU time used for Table 3.
    CPU time of our method CPU time of method in [38]
    \alpha \mathcal{M}=8 h= \frac{1}{5000}, T= \frac{1}{128} T= \frac{1}{5000}, h= \frac{1}{128}
    0.1 119.061 20.095 70.952
    0.5 118.001 19.991 71.117
    0.9 121.36 19.908 71.153

    Figure 4.  The AEs at different values of \mathcal{M} when \alpha = 0.7 for Example 5.2.
    Table 5.  The maximum AEs of Example 5.2 at \alpha = 0.8, \, \mathcal{M} = 8 .
    x t=0.2 t=0.4 t=0.6 t=0.8
    0.1 7.82271\times10^{-12} 9.69765\times10^{-11} 1.48667\times10^{-10} 2.65085\times10^{-10}
    0.2 4.24386\times10^{-11} 6.82703\times10^{-11} 2.34553\times10^{-10} 5.1182\times10^{-10}
    0.3 4.48637\times10^{-11} 6.28317\times10^{-11} 2.69028\times10^{-10} 4.28024\times10^{-10}
    0.4 2.57132\times10^{-11} 5.02944\times10^{-11} 1.45928\times10^{-10} 3.26067\times10^{-10}
    0.5 6.59974\times10^{-11} 1.22092\times10^{-11} 4.38141\times10^{-10} 7.16844\times10^{-10}
    0.6 1.11684\times10^{-11} 1.04731\times10^{-10} 1.78963\times10^{-10} 2.80989\times10^{-10}
    0.7 4.19906\times10^{-11} 7.33454\times10^{-11} 2.70484\times10^{-10} 4.25895\times10^{-10}
    0.8 2.97515\times10^{-11} 1.16961\times10^{-10} 2.70621\times10^{-10} 4.63116\times10^{-10}
    0.9 1.0014\times10^{-11} 8.68311\times10^{-11} 1.43244\times10^{-10} 2.71797\times10^{-10}

    Figure 5.  The AEs at different values of \alpha when \mathcal{M} = 10 for Example 5.2.

    Example 5.3. Consider the following equation:

    \begin{equation} v_{t}(x,t)-{D_{t}^{\alpha}}\,[\,v_{xx}(x,t)\,]-v_{xx}(x,t) = \mathcal{S}(x,t) , \quad 0 < \alpha < 1, \end{equation} (5.4)

    where

    \mathcal{S}(x,t) = \sin (2\, \pi\, x) \left(\frac{4 \,\pi ^2\, \Gamma (5)}{\Gamma (5-\alpha )}\, t^{4-\alpha }+4\, \pi ^2 \,t^4+4\, t^3\right),

    governed by (3.2) and (3.3). The exact solution of this problem is: u(x, t) = t^4\, \sin (2\, \pi\, x).

    Table 6 presents the maximum AEs at \alpha = 0.5 and \mathcal{M} = 9 when \ell = \tau = 1 . Figure 6 sketches the AEs at different \mathcal{M} and \alpha = 0.9 when \ell = \tau = 1 .

    Table 6.  The maximum AEs of Example 5.3 at \alpha = 0.5, \, \mathcal{M} = 9 .
    x t=0.2 t=0.4 t=0.6 t=0.8
    0.1 4.36556\times10^{-8} 6.4783\times10^{-8} 4.6896\times10^{-8} 3.07364\times10^{-8}
    0.2 3.41326\times10^{-8} 5.29923\times10^{-8} 2.18292\times10^{-8} 6.69242\times10^{-8}
    0.3 4.74795\times10^{-8} 5.70081\times10^{-5} 1.14413\times10^{-7} 1.69409\times10^{-7}
    0.4 8.64528\times10^{-8} 1.11593\times10^{-7} 1.6302\times10^{-7} 1.69931\times10^{-7}
    0.5 2.03915\times10^{-8} 2.95873\times10^{-8} 2.78558\times10^{-8} 2.04027\times10^{-11}
    0.6 8.36608\times10^{-8} 1.07773\times10^{-7} 1.59124\times10^{-7} 1.70274\times10^{-7}
    0.7 1.53354\times10^{-8} 1.00892\times10^{-8} 7.07611\times10^{-8} 1.68702\times10^{-7}
    0.8 3.88458\times10^{-8} 6.00347\times10^{-8} 2.85788\times10^{-8} 6.72342\times10^{-8}
    0.9 3.00922\times10^{-8} 4.49878\times10^{-8} 2.82622\times10^{-8} 3.06797\times10^{-8}

    Figure 6.  The AEs at different values of \mathcal{M} when \alpha = 0.9 for Example 5.3.

    Example 5.4. [61] Consider the following equation:

    \begin{equation} v_{t}(x,t)-{D_{t}^{\alpha}}\,[\,v_{xx}(x,t)\,]-v_{xx}(x,t) = \mathcal{S}(x,t) , \quad 0 < \alpha < 1, \end{equation} (5.5)

    governed by the following constraints:

    \begin{align} &v(x,0) = 0, \quad 0 < x < 1, \end{align} (5.6)
    \begin{align} & v(0,t) = t^{\gamma +2}, \quad v(1,t) = e\, t^{\gamma +2}, \quad 0 < t\leq 1, \end{align} (5.7)

    where

    \mathcal{S}(x,t) = e^x\, \left(t^{\gamma +1} (\gamma -t+2)-\frac{\Gamma(\gamma +3) }{\Gamma (-\alpha +\gamma +3)}\,t^{-\alpha +\gamma +2}\right),

    and the exact solution of this problem is: u(x, t) = e^x\, t^{\gamma +2}. This problem is solved for the case \gamma = 1 . In Table 7, we compare the L_{2} errors of the SSKGM with that obtained in [61] at different values of \alpha . This table shows the high accuracy of our method. Figure 7 illustrates the AE (left) and the approximate solution (right) at \mathcal{M} = 7 when \alpha = 0.6 .

    Table 7.  Comparison of the L_{2} errors for Example 5.4.
    Our method Method in [61]
    \alpha \mathcal{M}=7 n=m=10
    0.1 1.39197\times10^{-11} 2.176\times10^{-9}
    0.3 1.34984\times10^{-10} 9.045\times10^{-9}
    0.5 5.87711\times10^{-11} 1.516\times10^{-8}
    0.7 8.43536\times10^{-12} 1.415\times10^{-8}
    0.9 7.46064\times10^{-12} 5.749\times10^{-9}

    Figure 7.  The AE (left) and the approximate solution (right) at \mathcal{M} = 7 when \alpha = 0.6 for Example 5.4.

    This study presented a Galerkin algorithm for solving the FRSE using orthogonal combinations of the second-kind CPs. The Galerkin method converts the FRSE with its underlying conditions into a matrix system whose entries are given explicitly. A suitable algebraic algorithm may be utilized to solve such a system, and hence the approximate solution can be obtained. We showcased the effectiveness and precision of the algorithm through a comprehensive error analysis and by presenting multiple numerical examples. We believe the proposed method can be applied to other types of FDEs. As future work, we aim to employ the theoretical results developed in this paper and suitable spectral methods to treat other problems.

    W. M. Abd-Elhameed: Conceptualization, Methodology, Validation, Formal analysis, Funding acquisition, Investigation, Project administration, Supervision, Writing–Original draft, Writing–review & editing. A. M. Al-Sady: Methodology, Validation, Writing–Original draft; O. M. Alqubori: Methodology, Validation, Investigation; A. G. Atta: Conceptualization, Methodology, Validation, Formal analysis, Visualization, Software, Writing–Original draft, Writing–review & editing. All authors have read and agreed to the published version of the manuscript.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was funded by the University of Jeddah, Jeddah, Saudi Arabia, under grant No. (UJ-23-FR-70). Therefore, the authors thank the University of Jeddah for its technical and financial support.

    The authors declare that they have no competing interests.


