Research article

Fractional calculus of a product of multivariable Srivastava polynomial and multi-index Bessel function in the kernel F3

  • Received: 25 September 2019 Accepted: 08 January 2020 Published: 21 January 2020
  • MSC : 33C20, 33B15

  • In this article, our main objective is to compute image formulas of generalized fractional hypergeometric operators involving the product of a multivariable Srivastava polynomial and a multi-index Bessel function. The results obtained provide a unification and extension of known results given earlier by Agarwal and Nieto [1], Agarwal et al. [2], Mishra et al. [18], Saxena and Saigo [26], and Suthar et al. [32]. We also consider certain special cases of the derived results by specializing suitable values of the parameters.

    Citation: Owais Khan, Nabiullah Khan, Kottakkaran Sooppy Nisar, Mohd. Saif, Dumitru Baleanu. Fractional calculus of a product of multivariable Srivastava polynomial and multi-index Bessel function in the kernel F3[J]. AIMS Mathematics, 2020, 5(2): 1462-1475. doi: 10.3934/math.2020100



    One of the most popular techniques for solving advection-type equations is the backward semi-Lagrangian method (BSLM), which has the advantage over conventional Eulerian approaches of allowing a larger temporal step size while avoiding the mesh deformation of pure Lagrangian methods. Owing to these attributes, BSLMs have been studied extensively since their introduction in the early 1980s [29] across a vast range of models, such as weather forecasting [31,33], oceanography [29,38], engineering control [16], and various important fluid dynamics applications ([3,4,5,6,15,23,30,32,36,37] and the references therein). In addition, numerous recent research efforts have been devoted to developing BSLM algorithms that are more efficient computationally [10,21,24] as well as with respect to mass conservation and multi-dimensional settings [11,12,13].

    Convergence analyses of BSLMs have been conducted for various problems, including a second-order BSLM for linear advection–diffusion equations [19] and for nonlinear advection-type equations [27] using the conservation property, second-order finite element BSLMs for the Navier–Stokes equations using a priori estimates [8,9], and second-order finite difference BSLMs for advection–diffusion equations with the Dirichlet boundary condition [22]. Despite these studies, the convergence of finite difference BSLMs with a high-order spatial discretization for nonlinear advection-type problems remained unaddressed, owing to the complexity and difficulty of understanding the structure of the Cauchy problem represented by a characteristic curve.

    The BSLM deals with the nonlinear advection term by splitting the model equation into a Cauchy problem and a total differential equation along characteristic curves. Even though the nonlinearity appears to vanish in the Lagrangian procedure, it reappears as the velocity of the characteristic curves, which makes their departure points uncertain. This uncertainty makes the convergence analysis more challenging, because the truncation error of the departure points of the Cauchy problem contains the errors of the solution from the previous temporal steps in the time discretization (see (2.16) and (2.17), for example). Analyzing the error arising from the diffusion term along the characteristic curves is another difficulty for high-order spatial discretizations. The second-order spatial discretization gives a positive-definite matrix for the diffusion term satisfying the Dirichlet boundary condition [22], which allows the Cholesky decomposition to split the errors into consecutive temporal steps. A further issue in the convergence analysis lies in the temporal evolution of the truncation errors and the bound of the error contributed at each time step. For the Dirichlet problem with the second-order spatial discretization, the discrete Poincaré inequality allows one to bound the error at the final step by the spatial finite difference of the error. However, the techniques mentioned above are no longer applicable to the analysis of the BSLM with a high-order spatial discretization, which is the concern of this paper.

    This work is part of a sequential research project on the convergence analysis of the finite difference BSLMs proposed in [25,26,27,28], and it is a follow-up to [22]. The periodic problem permits the diffusion term to be discretized with a symmetric high-order spatial approximation. Owing to this symmetry, as in the second-order analysis, the Cholesky decomposition allows the errors related to the diffusion term to be split into contributions at two consecutive temporal steps (see Lemma 3.4). For the analysis of the finite difference BSLM with a high-order spatial discretization, we apply the Cholesky decomposition of the positive definite matrix arising from the discretization of the Laplacian in the periodic problem. Instead of using the discrete Poincaré inequality, which is no longer available in the periodic problem, we apply a telescoping summation technique to bound the error at the final step by the temporal finite differences of the error. This not only allows the analysis of a high-order spatial approximation but also provides a general convergence analysis, without boundary restrictions, for nonlinear advection–diffusion-type equations. More precisely, we consider Burgers' equation with the periodic boundary condition, a prototype of nonlinear advection–diffusion equations, as follows:

    \begin{equation} \begin{cases} u_t + u\,u_x = \nu u_{xx}, & a < x < b,\ 0 < t \leq T,\\ u(0,x) = u_0(x), & x \in I := [a,b], \end{cases} \end{equation} (1.1)

    where u(t,x) represents the particle velocity, \nu stands for the kinematic viscosity, and u_0(x) is a given smooth function. To illustrate the BSLM for the model problem (1.1), we assume that the temporal steps are uniformly distributed throughout the interval [0,T]; that is, the grid points are given by

    t_n := nh,\quad 0\leq n\leq N

    with the temporal step size h = T/N. For each temporal step t_n, let \chi(x,t_{n+1};t) be the characteristic curve arriving at the spatial point x at time t_{n+1}, which satisfies the nonlinear Cauchy problem

    \begin{equation} \begin{cases} \dfrac{d}{dt}\chi(x,t_{n+1};t) = u\bigl(t,\chi(x,t_{n+1};t)\bigr), & t < t_{n+1},\\ \chi(x,t_{n+1};t_{n+1}) = x. \end{cases} \end{equation} (1.2)

    Combining (1.1) and (1.2), the total derivative of u(t,\chi(x,t_{n+1};t)) along the characteristic curve \chi(x,t_{n+1};t), namely \frac{d}{dt}u(t,\chi) = u_t + u_x\,\frac{d\chi}{dt} = u_t + u\,u_x, yields the following diffusion equation:

    \begin{equation} \frac{d}{dt}u\bigl(t,\chi(x,t_{n+1};t)\bigr) = \nu\, u_{xx}\bigl(t,\chi(x,t_{n+1};t)\bigr),\quad 0 < t < t_{n+1},\ x\in I. \end{equation} (1.3)

    Notice that the characteristic curve \chi(x,t_{n+1};t) satisfying the Cauchy problem (1.2) is used to evolve the solution u of (1.3) along the characteristic curve, while the solution u is used as the velocity of the characteristic curves with a given initial value at the fixed grid points. This self-consistency requirement of the system generates the main aforementioned hindrance to analyzing the constructed method. The nonlinear Cauchy problem can be solved by several approaches, including iteration methods [1,34,36], iteration-free methods [26,27], and so on. The solution of (1.3) is usually obtained by combining three procedures: a backward finite difference method for the total derivative, an interpolation technique for the solution at off-grid departure points, and a solver for the diffusion term, for instance, a finite difference method [14], a finite element method [3,4,14], a spectral finite element method [36], or a discontinuous Galerkin finite element method [35].
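    To make the overall splitting concrete, the following minimal Python sketch outlines a single BSLM time step under the assumptions of this section. The helper arguments departure_points, interpolate, and solve_helmholtz are hypothetical placeholders for the three procedures just described (a Cauchy-problem solver such as the ECM of Subsection 2.1, the interpolation {\mathfrak L}, and the diffusion solver of Subsection 2.3); the sketch is illustrative, not the paper's implementation.

```python
def bslm_step(U_n, U_nm1, h, departure_points, interpolate, solve_helmholtz):
    """One backward semi-Lagrangian time step for Burgers' equation (a sketch).

    U_n, U_nm1         : grid values approximating u at t_n and t_{n-1}
    departure_points   : returns chi^n_j and chi^{n-1}_j (e.g., the ECM of Subsection 2.1)
    interpolate(U, p)  : evaluates the interpolant of U at the off-grid points p
    solve_helmholtz(r) : solves (3/(2h)) u - nu * u_xx = r on the grid
    """
    # 1. Locate the departure points of the characteristic curves (Cauchy problem (1.2)).
    chi_n, chi_nm1 = departure_points(U_n, U_nm1, h)

    # 2. Evaluate the previous solutions at the off-grid departure points.
    U_n_dep = interpolate(U_n, chi_n)
    U_nm1_dep = interpolate(U_nm1, chi_nm1)

    # 3. BDF2 for the total derivative along the characteristics turns (1.3)
    #    into a Helmholtz-type problem, cf. (2.20).
    rhs = (4.0 * U_n_dep - U_nm1_dep) / (2.0 * h)
    return solve_helmholtz(rhs)
```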

    In this paper, we present a concrete convergence analysis of the finite difference BSLM that adopts an iteration-free error correction method (ECM) [26,27] to solve the Cauchy problem (1.2), the second-order backward difference formula (BDF2) for the total derivative, and the fourth-order central finite difference scheme for the diffusion term in the discretization of Eq (1.3). The substantial contribution of this study is to provide not only a comprehensive analysis of the convergence and stability of a BSLM with a high-order spatial discretization but also experiments that support the theoretical results. As a prerequisite, we review the BSLM, modified from [26,27], used to determine the numerical solution of the model problem, while analyzing the local truncation errors of the departure points obtained by the ECM and of the temporal and spatial discretizations from the finite difference methods, BDF2 and the fourth-order central difference formula. Using the relations between these truncation errors and the boundedness properties of the interpolation, we derive the error equation of the proposed scheme at each time step with the help of the symmetric discretization of the diffusion term. Finally, after taking the telescoping summation of the error equations through all the temporal steps, we obtain the error of the numerical solution at the final time (see Theorem 3.6). The resulting order of convergence \mathcal{O}(h^2+\triangle x^4+\triangle x^{p+1}/h) with respect to the discrete \ell_2 -norm is consistent with the convergence analyses for linear advection-type equations [19] and finite element BSLMs [8,9], where p represents the degree of the interpolation. The result also validates the behavior observed in the numerical results, including the non-monotonic property [17] of \mathcal{O}(\triangle x^{p+1}/h). We further investigate the stability of the proposed scheme by showing that the discrete norm of the numerical solution is uniformly bounded without any restriction on the temporal step size h, which means that the proposed BSLM is unconditionally stable (see Theorem 3.8).

    The paper is organized as follows. The BSLM, comprising the ECM and finite difference schemes for the discretization of Eqs (1.2) and (1.3), is reviewed and its local truncation errors are analyzed in Section 2. Based on the error equation, the main results of the paper, the convergence and stability analysis of the BSLM for Burgers' equation, are presented in Section 3. In Section 4, numerical experiments are presented to support the theoretical analysis. Finally, Section 5 presents our conclusions and further discussion. Additionally, in the Appendix, we review the Lagrange interpolation formula and its properties.

    For the derivation of the BSLM, we review the ECM for solving the Cauchy problem in Subsection 2.1 and the fourth-order spatial discretization of the Helmholtz equation in Subsection 2.3. In particular, Subsection 2.2 is devoted to estimating a concrete bound for the truncation errors of the departure points.

    We start this subsection with a concise review of the ECM for solving the nonlinear self-consistent Cauchy problem given by

    \begin{equation} \begin{cases} \dfrac{d\chi_j(t)}{dt} = u\bigl(t,\chi_j(t)\bigr), & t\in[t_{n-1},t_{n+1}),\\ \chi_j(t_{n+1}) = x_j, \end{cases} \end{equation} (2.1)

    where \chi_j(t) := \chi(x_j,t_{n+1};t) is the characteristic curve that arrives at the grid point x_j\ (1\leq j\leq J) at the fixed temporal step t_{n+1} and u is the solution of the problem (1.1). Here, we assume that the spatial grids x_j\ (1\leq j\leq J) are uniformly spaced in the domain I := [a,b] as follows:

    \begin{equation} x_j := a + j\triangle x,\quad \triangle x := \frac{b-a}{J}. \end{equation} (2.2)

    The ECM then focuses on finding the departure points \chi_j(t_{n-k}) = \chi(x_j,t_{n+1};t_{n-k}),\ k = 0,1, which will be used for the BSLM. For notational simplicity, we denote u^m(x) := u(t_m,x), u^m_j := u^m(x_j), and \mathbf{u}^m := [u^m_1,\cdots,u^m_J]^{\mathtt T}. Furthermore, we assume that approximations \mathbf{U}^m := [U^m_1,\cdots,U^m_J]^{\mathtt T} of the solutions \mathbf{u}^m\ (m\leq n) have already been calculated at all spatial grids x_j\ (1\leq j\leq J) in the domain I.

    Let \phi(t) be the perturbed characteristic curve relative to the Euler polygon y(t), defined by

    \begin{equation} \phi(t) := \chi_j(t) - y(t),\quad y(t) := x_j + (t - t_{n+1})\,u^n_j,\quad t\in[t_{n-1},t_{n+1}]. \end{equation} (2.3)

    Then, by the Taylor expansion of \chi_j(t) about t = t_{n+1}, one can see that

    \phi(t) = (u^{n+1}_j - u^n_j)(t - t_{n+1}) + \frac{1}{2}\chi''_j(\xi_1)(t - t_{n+1})^2

    for some \xi_1\in(t,t_{n+1}). Again, the Taylor expansion of u^{n+1}_j about t = t_n implies that

    \begin{equation} |\phi(t)| \leq h\,|u_t(\xi_2,x_j)|\,|t - t_{n+1}| + \frac{1}{2}|\chi''_j(\xi_1)|\,(t - t_{n+1})^2 \leq Ch^2, \end{equation} (2.4)

    where \xi_2 belongs to the interval between t_n and t_{n+1} and the constant C depends only on the bounds of u, u_t and u_x. Also, by (2.1)–(2.4), the derivative \phi'(t) satisfies

    \begin{equation} \phi'(t) = u^n_x\bigl(y(t_n)\bigr)\phi(t) + u\bigl(t,y(t)\bigr) - u^n_j + T_1,\quad t\leq t_{n+1}, \end{equation} (2.5)

    where T_1 is bounded by

    \begin{equation} |T_1| \leq Ch^3 \end{equation} (2.6)

    for a constant C depending only on u, u_t, u_x, u_{xx}, and u_{xt}. We now use the bound (2.6) and the mid-point quadrature rule for integrating (2.5) over [t_{n-1},t_{n+1}] to obtain

    \phi(t_{n+1}) - \phi(t_{n-1}) = 2h\Bigl(u^n_x\bigl(y(t_n)\bigr)\phi(t_n) + u^n\bigl(y(t_n)\bigr) - u^n_j\Bigr) + \mathcal{O}(h^3).

    The property that \phi(t_n) = \frac{1}{2}\bigl(\phi(t_{n+1}) + \phi(t_{n-1})\bigr) + h^2\phi''(\xi_3) for some \xi_3\in(t_{n-1},t_{n+1}), together with \phi(t_{n+1}) = 0, yields

    \begin{equation} \Bigl(1 + h\,u^n_x\bigl(y(t_n)\bigr)\Bigr)\phi(t_{n-1}) = 2h\Bigl(u^n_j - u^n\bigl(y(t_n)\bigr)\Bigr) + \mathcal{O}(h^3). \end{equation} (2.7)

    Since y(t_n) is generally not a grid point, after truncating the asymptotic term of (2.7) the ECM uses Lagrange interpolation to evaluate u^n_x(y(t_n)) and u^n(y(t_n)). We then define the approximation \phi^{n-1} of \phi(t_{n-1}) by

    \begin{equation} \phi^{n-1} := 2h\Bigl(1 + h{\widetilde D}_{x}{\mathfrak L}\mathbf{U}^n({\mathrm y}^n)\Bigr)^{-1}\Bigl(U^n_j - {\mathfrak L}\mathbf{U}^n({\mathrm y}^n)\Bigr),\quad {\mathrm y}^n := x_j - hU^n_j, \end{equation} (2.8)

    where {\mathfrak L} and {\widetilde D}_{x}{\mathfrak L} represent the Lagrange interpolation polynomial of degree p and its piecewise average derivative at the grid points, respectively (see (A.1) for details). Here, the approximation {\mathrm y}^n of the Euler polygon y(t) at t = t_n satisfies

    \begin{equation} y(t_n) - {\mathrm y}^n = he^n_j, \end{equation} (2.9)

    where e^n_j is the error of u^n at x_j, given by

    \begin{equation} e^n_j := u^n_j - U^n_j,\quad \mathbf{e}^n := [e^n_1,\cdots,e^n_j,\cdots,e^n_J]^{\mathtt T}. \end{equation} (2.10)

    Instead of solving (2.1) directly, the iteration-free ECM uses the approximation \phi^{n-1} of (2.8) and the relation (2.3) to obtain an approximation \chi^{n-1}_j of \chi_j(t_{n-1}), defined by

    \begin{equation} \chi^{n-1}_j := {\mathrm y}^{n-1} + \phi^{n-1} = x_j - 2hU^n_j + \phi^{n-1}. \end{equation} (2.11)

    In addition, to find an approximation \chi^n_j for \chi_j(t_n), we use the Taylor expansion of \chi_j(t) at t_{n-1}, given by

    \begin{equation} \chi_j(t_n) = \frac{1}{4}\Bigl(x_j + 3\chi_j(t_{n-1}) + 2h\,u^{n-1}\bigl(\chi_j(t_{n-1})\bigr)\Bigr) + \mathcal{O}(h^3). \end{equation} (2.12)

    Using \chi^{n-1}_j, {\mathfrak L}\mathbf{U}^{n-1}(\chi^{n-1}_j), and (2.12), we finally define \chi^n_j by

    \begin{equation} \chi^n_j := \frac{1}{4}\Bigl(x_j + 3\chi^{n-1}_j + 2h\,{\mathfrak L}\mathbf{U}^{n-1}(\chi^{n-1}_j)\Bigr). \end{equation} (2.13)
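    For illustration, the ECM formulas (2.8), (2.11), and (2.13) translate directly into a short, iteration-free routine. The sketch below assumes helper routines lagrange_eval and lagrange_deriv (hypothetical names) that evaluate the degree-p Lagrange interpolant {\mathfrak L} and its piecewise average derivative {\widetilde D}_{x}{\mathfrak L} at off-grid points; it is a sketch of the formulas, not the paper's code.

```python
def ecm_departure_points(U_n, U_nm1, x, h, lagrange_eval, lagrange_deriv):
    """Iteration-free ECM approximations of the departure points (a sketch).

    Componentwise, the routine computes
      y^n       = x_j - h U^n_j                                    (Euler polygon)
      phi^{n-1} = 2h (1 + h D~L U^n(y^n))^{-1} (U^n_j - L U^n(y^n))  # (2.8)
      chi^{n-1} = x_j - 2h U^n_j + phi^{n-1}                         # (2.11)
      chi^n     = (x_j + 3 chi^{n-1} + 2h L U^{n-1}(chi^{n-1})) / 4  # (2.13)

    lagrange_eval(U, pts) and lagrange_deriv(U, pts) stand for the Lagrange
    interpolant L and its averaged derivative D~_x L (see the Appendix).
    """
    y_n = x - h * U_n                                  # Euler polygon at t_n
    num = U_n - lagrange_eval(U_n, y_n)
    den = 1.0 + h * lagrange_deriv(U_n, y_n)
    phi_nm1 = 2.0 * h * num / den                      # formula (2.8)

    chi_nm1 = x - 2.0 * h * U_n + phi_nm1              # formula (2.11)
    chi_n = 0.25 * (x + 3.0 * chi_nm1
                    + 2.0 * h * lagrange_eval(U_nm1, chi_nm1))  # formula (2.13)
    return chi_n, chi_nm1
```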

    In this subsection, we focus on estimating the truncation errors \tau^{n-k}_j of the approximations \chi_j^{n-k} of the two departure points \chi_j(t_{n-k}), defined by

    \begin{equation} \tau^{n-k}_j := \chi_j(t_{n-k}) - \chi^{n-k}_{j},\quad k = 0,1. \end{equation} (2.14)

    From (2.7) and (2.8), the approximation error of the perturbed characteristic curve \phi(t) given by (2.4) satisfies

    \begin{equation} \begin{aligned} \Bigl(1 +h {\widetilde D}_{x}{\mathfrak L} \mathbf{U}^n({ \mathrm y}^n) \Bigr)(\phi(t_{n-1})-\phi^{n-1}) = & h \Bigl( {{\widetilde D}_x} {\mathfrak L} \mathbf{U}^n({ \mathrm y}^n) - u^n_x({y}(t_n))\Bigr)\phi(t_{n-1})\\ &+2h\Bigl(e^n_j+{\mathfrak L} \mathbf{U}^n({ \mathrm y}^n)-u^n(y(t_n)) \Bigr)+\mathcal{O}(h^3). \end{aligned} \end{equation} (2.15)

    Therefore, Eqs (2.3), (2.9), (2.11), and (2.15) make up the truncation error for the departure points

    \begin{equation} \tau^{n-1}_j := \Bigl(1 +h {\widetilde D}_{x}{\mathfrak L} \mathbf{U}^n({ \mathrm y}^n) \Bigr)^{-1} \epsilon_{\tau}-2he^n_j, \end{equation} (2.16)

    where \epsilon_{\tau} is given by

    \begin{equation} \begin{aligned} \epsilon_{\tau} : = & h\left( {\widetilde{D}_x} {\mathfrak L} \mathbf{U}^n({ \mathrm y}^n) - u^n_x({y}(t_n))\right){ {\phi}} (t_{n-1})+ 2h \Bigl( e^n_j+ {{\mathfrak L}} {{ \mathbf{U}}}^n({ \mathrm y}^n) - {u}^n({y}(t_n))\Bigr)+\mathcal{O}(h^3). \end{aligned} \end{equation} (2.17)

    Moreover, Eqs (2.12) and (2.13) imply that the truncation error at t = t_n satisfies

    \begin{equation} \tau^{n}_{j}: = \frac{3}{4} \tau^{n-1}_{j}+\frac{h}{2}\left( {u}^{n-1}( { \chi}_{j}(t_{n-1}))- {\mathfrak L} {{ \mathbf{U}}}^{n-1} ( { \chi}_{j}^{n-1})\right)+\mathcal{O}(h^3). \end{equation} (2.18)

    To estimate the truncation errors \tau^{n-k}_j , we introduce the first-order backward spatial finite difference operator {\delta}_x , given by

    \begin{equation} \begin{aligned} {\delta}_{x}{v}_j : = \frac{1}{\triangle x}\left({v}_j-{v}_{j-1}\right), \quad j = 1,\cdots,J. \end{aligned} \end{equation} (2.19)

    Lemma 2.1. Assume that the Euler polygon y(t_{n}) and its approximation { \mathrm y}^{n} belong to the same subinterval I_j at t = t_{n} for each j . Then we have

    \begin{equation*} \begin{aligned} &{ \left| {\mathfrak L} \mathbf{U}^{n}({ \mathrm y}^{n}) - u^{n}({y}(t_{n})) \right|} \leq C \Big(\sum\limits_{j_\ell\in \Lambda_j}\left|e^{n}_{j_\ell}\right|+ h |e^{n}_j|+\triangle x^{p+1}\Big),\\ & \left|{\widetilde{D}_x} {\mathfrak L} \mathbf{U}^n({ \mathrm y}^n) - u^n_x({y}(t_n))\right|\leq C' \Big(\sum\limits_{j_\ell\in \Lambda'_j}\left|\delta_{x}e^n_{j_\ell}\right|+ h |e^n_j|+\triangle x^p\Big), \end{aligned} \end{equation*}

    where C and C' depend only on the degree p of the interpolation and the bounds of the first and second partial derivatives of u .

    Proof. Using \mathbf{e}^n in (2.10), we split the quantity |{\mathfrak L} \mathbf{U}^n({ \mathrm y}^n) - u^n({y}(t_n))| into three terms such as

    I_1: = |{\mathfrak L} \mathbf{e}^n({ \mathrm y}^n)|, \quad I_2: = | {\mathfrak L} \mathbf{u}^n ( \mathrm y^n) - u^n({ \mathrm y}^n)| , \mbox{ and } {I_3}: = |u^n({ \mathrm y}^n) - u^n({y}(t_n))|.

    The definition (A.1) of interpolation and its associated property (A.3), respectively, provide that

    I_1\leq C \sum\limits_{j_\ell\in \Lambda_{j}}\left|e^n_{j_\ell}\right| \mbox{ and } \quad I_2\leq C\triangle x^{p+1},

    where C is independent of the indices j_\ell and j . Also, the mean value theorem and the property (2.9) give I_3\leq Ch|e^n_j| for some constant C that depends only on the bounds of the partial derivatives of u , which implies that the first inequality holds. To estimate \widetilde{D}_x {\mathfrak L} \mathbf{U}^n({ \mathrm y}^n) - u^n_x\left({y}(t_n)\right) , we assume that \mathrm y^n is in the interior of I_j and split it into three terms. By virtue of Lemma A.1, (2.9), the mean value theorem, and the interpolation error (A.5), we have

    \begin{align*} \left|{D}_x {\mathfrak L} \mathbf{U}^n({ \mathrm y}^n) - u^n_x({y}(t_n))\right|&\leq \left|{D_x} {\mathfrak L} \mathbf{e}^n({ \mathrm y}^n)\right| +\left|{D_x}{\mathfrak L} \mathbf{u}^n ( \mathrm y^n) - u^n_x({ \mathrm y}^n)\right|+\left|u^n_x({ \mathrm y}^n) - u^n_x({y}(t_n))\right| \\ &\leq C \left(\sum\limits_{j_\ell\in \Lambda'_j}\left|\delta_{x}e^n_{j_\ell}\right|+ h |e^n_j|+\triangle x^p\right), \end{align*}

    where C depends only on the bounds of the partial derivatives of u . This establishes the second inequality. The estimates of the previous lemma give the invertibility of 1+h{\widetilde D}_{x}({\mathfrak L} \mathbf{U}^n)({ \mathrm y}^n) as follows.

    Corollary 2.2. Assume that for n < N , \mathbf{e}^n and \delta_x \mathbf{e}^n are sufficiently small. Then, there exists a temporal step size h < 1 such that 1+h{\widetilde D}_{x}{\mathfrak L} \mathbf{U}^n({ \mathrm y}^n) is invertible and

    \Big|\left(1+h{\widetilde D}_{x}{\mathfrak L} \mathbf{U}^n({ \mathrm y}^n) \right)^{-1}\Big| < 1+\epsilon,\quad \epsilon\ll 1.

    Proof. To show the invertibility of 1+h{\widetilde D}_{x}{\mathfrak L} \mathbf{U}^n({ \mathrm y}^n) for a small h < 1 , it is enough to show that \big|{\widetilde D}_{x}{\mathfrak L} \mathbf{U}^n({ \mathrm y}^n)\big| is uniformly bounded. This follows easily from the second result in Lemma 2.1 by considering

    \big|{\widetilde D}_{x}{\mathfrak L} \mathbf{U}^n({ \mathrm y}^n)\big| \leq \big|u^n_x({y}(t_n)) \big|+\big| {\widetilde D}_{x}{\mathfrak L} \mathbf{U}^n({ \mathrm y}^n)-u^n_x({y}(t_n)) \big|.

    Combining the outcomes of Lemma 2.1 and Corollary 2.2 yields the estimates for the truncation errors of the departure points at the previous two steps:

    Corollary 2.3. Assume that both { y}(t_{n-k}) and { \mathrm y}^{n-k} , the Euler polygon and its approximation as defined by (2.3), (2.8), and (2.11), belong to the same j^{th} subinterval I_{j} for k = 0, 1 . Then the truncation error \tau^{n-k}_{j}\ (k = 0, 1) of the departure points in (2.14) can be estimated as follows:

    \begin{equation*} \begin{aligned} & \big|\tau^{n-k}_{j}\big| \leq C\left[ {h^3} \Big( \sum\limits_{j_\ell\in \Lambda'_j}\left|\delta_{x}e^n_{j_\ell}\right|+ h |e^n_j|+\triangle x^p \Big) +h\Big( \sum\limits_{j_\ell\in \Lambda_j}\left(\left|e^n_{j_\ell}\right|+\left|e^{n-1}_{j_\ell}\right|\right)+ \triangle x^{p+1} \Big)\right], \quad \forall \,n < N \end{aligned} \end{equation*}

    for some constant C depending on p and on the bounds of u, u_t and u_x .

    Proof. Note that the quantity \epsilon_{\tau} defined in (2.17) can be estimated from the bound (2.4) and Lemma 2.1. Thus, using the uniform bound of Corollary 2.2 yields the desired order of the estimate for \tau^{n-1}_{j} . By applying the result for \tau^{n-1}_{j} in (2.18) and Lemma 2.1, we can also obtain the bound of \tau^{n}_{j} from the estimate

    \left|{u}^{n-1}( { \chi}_{j}(t_{n-1}))- {\mathfrak L} {{ \mathbf{U}}}^{n-1} ( { \chi}_{j}^{n-1})\right| \leq C\Big(\sum\limits_{j_\ell\in \Lambda_j}\left|e^{n-1}_{j_\ell}\right|+ \triangle x^{p+1} +\big|\tau^{n-1}_{j}\big| \Big),

    which can be obtained by splitting into three terms as performed in the proof of Lemma 2.1.

    In this subsection, we review the BSLM [25,26,27] for solving (1.3). To do this, the fourth-order central finite difference method is chosen to discretize the diffusion term in space, and the second-order BDF (BDF2) is used to discretize the total derivative using the departure points obtained in Subsection 2.1 together with the interpolation {\mathfrak L} , which is reviewed in the Appendix.

    To this end, we first evaluate the diffusion equation (1.3) at time t = t_{n+1} and then apply BDF2 to the total derivative, resulting in the asymptotic one-dimensional Helmholtz equation

    \begin{equation} \frac{3}{2h}{u}^{n+1}({x})-\nu {u}^{n+1}_{xx}({x}) = \frac{1}{2h}\Bigl(4{u}^{n}\left({ \chi}({x}, t_{n+1};t_n)\right)-{u}^{n-1}\left({ \chi}({x}, t_{n+1};t_{n-1})\right)\Bigr)+\mathcal{O}\left(h^2\right). \end{equation} (2.20)

    For the discretization of the diffusion term u^{n}_{xx} , we employ the finite difference operator {\delta}_{x}^4 with the fourth-order accuracy defined by

    \begin{equation} \begin{aligned} \delta_{x}^4u^n_{j} &: = \frac{-u^n_{j-2}+16u^n_{j-1}-30u^n_{j}+16u^{n}_{j+1}-u^n_{j+2}}{12\triangle x^2}. \end{aligned} \end{equation} (2.21)
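    In the periodic setting, the stencil (2.21) can be applied with wrap-around indexing. The following NumPy sketch (illustrative only; the function name delta_x4 and the sine test are our own, not from the paper) applies the operator and checks its fourth-order accuracy on a smooth periodic function.

```python
import numpy as np

def delta_x4(u, dx):
    """Fourth-order central approximation (2.21) of u_xx, periodic wrap-around."""
    return (-np.roll(u, 2) + 16 * np.roll(u, 1) - 30 * u
            + 16 * np.roll(u, -1) - np.roll(u, -2)) / (12 * dx**2)

# quick accuracy check on u(x) = sin(x) over one period: u_xx = -sin(x)
J = 64
x = 2 * np.pi * np.arange(J) / J
dx = 2 * np.pi / J
err = np.max(np.abs(delta_x4(np.sin(x), dx) + np.sin(x)))
print(f"max error = {err:.3e}")  # decreases roughly like dx**4 when J is doubled
```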

    Using the difference operator {\delta}_{x}^4 and the approximations \chi_{j}^{n-k}\; (k = 0, 1) for the departure points \chi_j(t_{n-k}): = \chi(x_j, t_{n+1};t_{n-k}) developed in the previous subsection, the Eq (2.20) can be discretized at each grid point {x}_{j} , as follows:

    \begin{equation} \begin{aligned} \frac{3}{2h} {u}^{n+1}_{j}-\nu {\delta}_{x}^4u^{n+1}_{j} & = \frac{1}{2h}\Bigl(4{u}^{n}\left({ \chi}_j(t_n)\right)-{u}^{n-1}\left({ \chi}_j(t_{n-1})\right)\Bigr)+\mathcal{O}\left(h^2+\triangle x^4\right)\\ & = \frac{1}{2h}\Bigl(4 u^{n} ( \chi^{n}_{j}) - u^{n-1} ( \chi^{n-1}_{j})\Bigr)+{\mathfrak T}^{n+1}_j,\\ {\mathfrak T}^{n+1}_j&: = \frac{1}{2h}\left(4u^n_x(\xi^0_j)\tau^{n}_j-u^{n-1}_x(\xi_j^1)\tau^{n-1}_j\right)+\mathcal{O}\left( h^2+\triangle x^4 \right) \end{aligned} \end{equation} (2.22)

    for some \xi^k_j\; (k = 0, 1) between \chi_j(t_{n-k}) and \chi_j^{n-k} . Using the interpolation polynomials {\mathfrak L} { \mathbf{U} }^{n-k}(\chi_j^{n-k}) for {u}^{n-k} (\chi_j^{n-k}) ( k = 0, 1 ) and truncating the asymptotic term {\mathfrak T}^{n+1}_j in (2.22), which is bounded by the result of Corollary 2.3, we introduce a discrete system for the approximation U^{n+1}_j of the solution u^{n+1}_j given by

    \begin{equation} \frac{3}{2h} {U}^{n+1}_{j}-\nu {\delta}_{x}^4 U^{n+1}_{j} = \frac{1}{2h} \Bigl( 4 {\mathfrak L} { \mathbf{U} }^{n} ( \chi^{n}_{j})-{\mathfrak L} \mathbf{U}^{n-1} ( \chi^{n-1}_{j})\Bigr),\quad 1 \leq j \leq J. \end{equation} (2.23)

    For convenience, let us define a vector {\mathfrak L} { \mathbf{U} }^{n-k} (\mathit{\boldsymbol{\chi}}^{n-k}) by

    \begin{equation*} {\mathfrak L} { \mathbf{U} }^{n-k} ( \mathit{\boldsymbol{\chi}}^{n-k}): = \left[{\mathfrak L} { \mathbf{U} }^{n-k} ( \chi^{n-k}_1) ,\cdots, {\mathfrak L} { \mathbf{U} }^{n-k} ( \chi^{n-k}_J)\right]^{\mathtt T}\in\mathbb{R}^J,\quad k = 0,1 \end{equation*}

    and a symmetric circulant matrix {\mathcal A} for the periodic model problem by

    \begin{equation} {\mathcal A} : = \left(a_{i,j}\right)_{J\times J},\quad a_{i,j} = \frac{1}{12\triangle x^2} \begin{cases} -30 & \text{if}\quad i = j,\\ 16 & \text{if}\quad |i-j| = 1 \text{ or } |i-j| = J-1,\\ -1 & \text{if}\quad |i-j| = 2 \text{ or } |i-j| = J-2 ,\\ 0 & \text{otherwise}. \end{cases} \end{equation} (2.24)
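    Since {\mathcal A} is circulant, it is determined by its first column. A minimal sketch (assuming SciPy is available; build_A is a hypothetical helper name, not from the paper) constructs the matrix of (2.24) and checks its symmetry:

```python
import numpy as np
from scipy.linalg import circulant

def build_A(J, dx):
    """Symmetric circulant matrix (2.24) for the periodic fourth-order stencil (J >= 5)."""
    first_col = np.zeros(J)
    first_col[[0, 1, 2, J - 2, J - 1]] = np.array([-30, 16, -1, -1, 16]) / (12 * dx**2)
    A = circulant(first_col)        # circulant matrix with this first column
    assert np.allclose(A, A.T)      # symmetric, as claimed in the text
    return A
```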

    Using these vectors and the matrix {\mathcal A} , the system of Eq (2.23) can be simplified as

    \begin{equation} \begin{aligned} &\left(\frac{3}{2h} \mathcal I- \nu \mathcal{A} \right){ \mathbf{U}}^{n+1} = \frac{1}{2h} \Bigl( 4 {\mathfrak L}{ \mathbf{U}}^{n} ( \mathit{\boldsymbol{\chi}}^{n})-{\mathfrak L} { \mathbf{U}}^{n-1}( \mathit{\boldsymbol{\chi}}^{n-1})\Bigr),\quad n = 1,\cdots, N-1, \end{aligned} \end{equation} (2.25)

    where {\mathcal I} denotes the identity matrix of size J\times J .

    Remark 2.4. The eigenvalue computations in [20,Theorem 3.1] for {\mathcal A} involving the periodic boundary condition can be used as an efficient solver for the system (2.25). Since the matrix \mathcal{A} defined by (2.24) is symmetric circulant its eigenvalue decomposition is given by

    \begin{equation} \begin{aligned} \mathcal{A} = \mathbf{Q} {\Lambda} \mathbf{Q}^{\mathtt T},&\quad {\Lambda}: = \text{diag}(\lambda_{1},\dots,\lambda_{j},\dots,\lambda_{J}), \quad\mathbf{Q}: = \left(\mathbf{q}^{1},\dots,\mathbf{q}^{j},\dots,\mathbf{q}^{J}\right),\\ & \lambda_{j} = \frac{- 1}{ 6 \triangle x^{2}}\left(\cos\left(\frac{4(j-1) \pi}{J}\right) -16\cos\left(\frac{2(j-1) \pi}{J}\right) +15\right), \\ &\mathbf{q}^{j} = \frac{1}{\sqrt{J}}\left[1, \cos\left(\frac{2 \pi j}{J} \right),\cdots, \cos\left(\frac{2 \pi j(J-1) }{J} \right) \right]^{\mathtt T}, \end{aligned} \end{equation} (2.26)

    where \mathbf{q}^{j} is the eigenvector corresponding to the eigenvalue \lambda_j . With the decomposition in (2.26), the system (2.25) can be solved by the following procedure:

    \begin{equation} \begin{aligned} &\widetilde{\mathbf v}^{n+1} = \frac{1}{2h} \mathbf{Q}^{\mathtt T} \Bigl( 4 {\mathfrak L}{ \mathbf{U}}^{n} ( \mathit{\boldsymbol{\chi}}^{n})-{\mathfrak L} { \mathbf{U}}^{n-1}( \mathit{\boldsymbol{\chi}}^{n-1})\Bigr),\\ & \left(\frac{3}{2h}{\mathcal I}- \nu\Lambda\right) {\mathbf{v}}^{n+1} = \widetilde{\mathbf v}^{n+1},\\ &{ \mathbf{U}}^{n+1} = \mathbf{Q} {\mathbf{v}}^{n+1}. \end{aligned} \end{equation} (2.27)
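    The diagonalization in Remark 2.4 can also be realized numerically without forming \mathbf{Q} explicitly, because a circulant matrix is diagonalized by the discrete Fourier transform: the DFT of the first column of {\mathcal A} reproduces the eigenvalues \lambda_j in (2.26) (with j-1 playing the role of the frequency index). The following FFT-based NumPy sketch is an illustration under that assumption, not the paper's solver; it carries out the solve of (2.25) in the spirit of the procedure (2.27).

```python
import numpy as np

def solve_bslm_system(rhs, h, nu, dx):
    """Solve (3/(2h) I - nu A) U = rhs for the periodic stencil (2.24) via the FFT.

    A is circulant, hence diagonalized by the DFT; the DFT of its first column
    gives its (real, non-positive) eigenvalues, matching lambda_j in (2.26).
    """
    J = rhs.size
    first_col = np.zeros(J)
    first_col[[0, 1, 2, J - 2, J - 1]] = np.array([-30, 16, -1, -1, 16]) / (12 * dx**2)
    lam = np.fft.fft(first_col).real              # eigenvalues of A
    U_hat = np.fft.fft(rhs) / (3.0 / (2.0 * h) - nu * lam)
    return np.fft.ifft(U_hat).real                # rhs is real, so U is real
```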

    Remark 2.5. Note that since \cos(2\theta)-16\cos (\theta)+15\geq 0 for 0\leq \theta \leq 2 \pi , all eigenvalues \lambda_{j} in (2.26) are non-positive real values. Thus, the entries of the diagonal matrix \frac{3}{2h}{\mathcal I}- \nu\Lambda in (2.27) are strictly positive. Therefore, the matrix \frac{3}{2h}{\mathcal I}- \nu\Lambda is invertible and hence the discretized system (2.27) is uniquely solvable.

    This section mainly aims to present the convergence analysis of the BSLM by manipulating the bounds of the truncation errors along the discrete time evolution. To do this, we first introduce several definitions and hypotheses to be used in the subsequent analysis. Let {\delta}_{t} be the first-order backward temporal finite difference operator for the vector {\mathbf v}^{n+1}: = [v^{n+1}_1, \ldots, v^{n+1}_J]^{\mathtt T}\in {\mathbb R}^{J} defined by

    \begin{equation} \begin{aligned} {\delta}_{t}\mathbf{v}^{n+1}: = \frac{1}{h}\left(\mathbf{v}^{n+1}-\mathbf{v}^{n}\right). \end{aligned} \end{equation} (3.1)

    Furthermore, let \langle {\mathbf f}, {\mathbf g} \rangle be the discrete \ell_2 -inner product defined by

    \begin{equation*} \label{sec2:eq23:norm} \langle {\mathbf{f}},{\mathbf{g}} \rangle: = \triangle x\sum\limits_{1\leq j\leq J}{f}_{j} {g}_{j},\quad {\mathbf f}: = \left[f_1,\cdots,f_J\right]^{\mathtt T},\quad{\mathbf g}: = \left[g_1,\cdots,g_J\right]^{\mathtt T} \end{equation*}

    and let \|\mathbf{g}\|_{2} be the corresponding discrete \ell_2 -norm for the vector {\mathbf g} defined by

    \begin{equation} \|\mathbf{g}\|_{2}^2 = \langle {\mathbf{g}},{\mathbf{g}} \rangle = \triangle x \sum\limits_{1\leq j\leq J} {g}_{j}^2. \end{equation} (3.2)
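    For reference in the numerical experiments, the discrete operators and norms introduced above translate directly into code; the following small NumPy helpers (an illustrative sketch with hypothetical names) implement the backward differences (2.19) and (3.1) and the discrete \ell_2 inner product and norm of (3.2), using periodic wrap-around in space.

```python
import numpy as np

def l2_inner(f, g, dx):
    """Discrete l2 inner product <f, g> = dx * sum_j f_j g_j."""
    return dx * np.dot(f, g)

def l2_norm(g, dx):
    """Discrete l2 norm ||g||_2 = sqrt(<g, g>), as in (3.2)."""
    return np.sqrt(l2_inner(g, g, dx))

def delta_x(v, dx):
    """First-order backward spatial difference (2.19) with periodic wrap-around."""
    return (v - np.roll(v, 1)) / dx

def delta_t(v_new, v_old, h):
    """First-order backward temporal difference (3.1)."""
    return (v_new - v_old) / h
```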

    The convergence analysis for the BSLM presented here is based on an induction technique under mesh restriction (see [8,9,22]); we assume the induction hypothesis for errors at each temporal step and mesh size restriction as follows:

    ( A_1 ) For N\geq2 , there exists a constant C_{A_1} > 0 independent of h and \triangle x satisfying

    \begin{equation*} \max\limits_{0\leq n\leq N-1}\big\|{\mathbf e}^{n}\big\|_2\leq C_{A_1}h^{2} \quad \text{ and } \quad \max\limits_{0\leq n\leq N-1}\big\|{\delta}_x{\mathbf e}^{n}\big\|_{2} \leq C_{A_1}h^{2} \end{equation*}

    for sufficiently small \triangle x\ll 1 .

    ( A_2 ) We assume that the temporal and spatial mesh sizes satisfy

    \begin{equation*} \frac{h^2}{\triangle x} = C_{A_2} \end{equation*}

    for some constant C_{A_2} > 0 .

    Remark 3.1. Applying the previous hypotheses to the result of Corollary 2.3, we have

    \begin{equation} \big|\tau^{n-k}_{j}\big| \leq C_{\mathtt 1}h \left( {h^3}+\sum\limits_{j_\ell\in \Lambda_j} \left( \left|e^{n}_{j_\ell}\right|+\left|e^{n-1}_{j_\ell}\right|\right) +\triangle x^{p+1}\right), \quad { k = 0,1} \end{equation} (3.3)

    for C_{\mathtt 1} depending only on p , C_{A_1} and C_{A_2} , since

    \sum\limits_{j_\ell\in \Lambda'_j}\left|\delta_{x}e^n_{j_\ell}\right| \leq { \sqrt{p}}\left( \sum\limits_{j_\ell\in \Lambda'_j}\left|\delta_{x}e^n_{j_\ell}\right|^2\right)^{1/2}\leq \frac{1}{\sqrt{\triangle x}} \sqrt{p} \left\|\delta_{x}{\mathbf e}^n\right\|_2 \leq Ch,\quad C = \sqrt{p} C_{A_1}\sqrt{C_{A_2}}.

    We first derive an error equation for the proposed method using (2.22) and (2.23) as follows. From (2.22) and the fact that u^m(x) - {{\mathfrak L} \mathbf{u}^m}(x) = \mathcal{O}(\triangle x^{p+1}) for a sufficiently smooth solution u , subtracting (2.23) from (2.22) and multiplying the resulting equation by \frac{2}{3} lead to an equation for the error {\mathbf {e}}^{n+1} :

    \begin{equation} \left(\frac{1}{h}\mathcal{I} - \frac{2}{3}\nu{\mathcal A}\right){ \mathbf{e}}^{n+1} = \frac{1}{3h}\Bigl(4 {\mathfrak L} { \mathbf{e}}^n ({ \mathit{\boldsymbol{\chi}}}^{n}) -{\mathfrak L} {\mathbf {e}}^{n-1} ({ \mathit{\boldsymbol{\chi}}}^{n-1}) \Bigr) + \mathbf{r}^{n+1}, \end{equation} (3.4)

    where

    {\mathfrak L} { \mathbf{e}}^{n-k} ({ \mathit{\boldsymbol{\chi}}}^{n-k}): = \left[ {\mathfrak L} { \mathbf{e}}^{n-k} ({ \chi}^{n-k}_1), \cdots, {\mathfrak L} { \mathbf{e}}^{n-k} ({ \chi}^{n-k}_J)\right]^{\mathtt T},\quad k = 0,1,\quad n\geq1

    and

    \mathbf{r}^{n+1}: = \left[r_1^{n+1},\cdots,r_j^{n+1},\cdots,r_J^{n+1}\right]^{\mathtt T},\quad {r}_{j}^{n+1}: = \frac{1}{3h}\left( 4 u^n_x(\xi^0_j)\tau^{n}_j-u^{n-1}_x(\xi_j^1)\tau^{n-1}_j\right) +{{\mathcal O}_\Gamma },
    \begin{equation} \begin{aligned} {{\mathcal O}_\Gamma }: = {\mathcal{O}}\left({h^2}+ \triangle x^4+\frac{\triangle x^{p+1}}{h}\right). \end{aligned} \end{equation} (3.5)

    Note that from Remark 3.1, each component {r}_{j}^{n+1} can be estimated by

    \begin{equation} \left|{r}_{j}^{n+1}\right|\leq C_{\mathtt 2}\sum\limits_{j_\ell\in{\Lambda}_j} \left(\left|e^n_{j_\ell}\right|+\left|e^{n-1}_{j_\ell}\right|\right)+{ {{\mathcal O}_\Gamma } }, \end{equation} (3.6)

    where C_{\mathtt 2}: = \max\left\{ \frac{4}{3}C_{\mathtt 1}\|u_x^n\|_\infty, \frac{1}{3}C_{\mathtt 1}\|u^{n-1}_x\|_\infty\right\} . Taking the discrete \ell_2 -inner product with {\delta}_t {{ \mathbf{e}}}^{n+1} after some manipulation of (3.4), one can obtain

    \begin{equation} \begin{aligned} \left\| {\delta}_t {{ \mathbf{e}}}^{n+1} \right\|_{2}^2 -\frac{2\nu}{3}\langle{\mathcal A} {\mathbf {e}}^{n+1},{\delta}_{t} {\mathbf {e}}^{n+1}\rangle = &\frac{1}{3}\langle {\delta}_{t} {\mathbf {e}}^{n},{\delta}_{t} {\mathbf e}^{n+1}\rangle +\frac{4}{3h} \langle{{\mathfrak L}}{\mathbf {e}}^{ n}( \mathit{\boldsymbol{\chi}}^{n+1,n})- \mathbf{e}^{n},{\delta}_{t} {\mathbf {e}}^{n+1}\rangle\\ &-\frac{1}{3h} \langle {{\mathfrak L}}{\mathbf {e}}^{ n-1}( \mathit{\boldsymbol{\chi}}^{n+1,n-1})-{\mathbf e}^{ n-1},{\delta}_{t} {\mathbf {e}}^{n+1}\rangle+ \langle\mathbf{r}^{n+1},{\delta}_{t} {\mathbf {e}}^{n+1}\rangle. \end{aligned} \end{equation} (3.7)

    The basic idea for estimating \|{\mathbf {e}}^{n+1}\|_2 is to use an induction hypothesis with a bound of \left\| {\delta}_x {{ \mathbf{e}}}^{n+1} \right\|_{2} obtained by estimating each term of the Eq (3.7). We begin with an estimation of the last term \langle\mathbf{r}^{n+1}, {\delta}_{t} {\mathbf {e}}^{n+1}\rangle of (3.7) in the following lemma.

    Lemma 3.2. Suppose that all assumptions in Lemma 2.1 are satisfied. Then \langle\mathbf{r}^{n+1}, {\delta}_{t} {\mathbf {e}}^{n+1}\rangle can be estimated by

    \begin{equation*} \label{sec3:eq7} \left|\langle {\mathbf {r}}^{n+1} , {\delta}_{t}{ \mathbf{e}}^{n+1}\rangle \right| \leq C_{\mathtt 3} \sum\limits_{k = 0,1}\| \mathbf{e}^{n-k}\|_2^2+\frac{1}{6} \left\| {\delta}_{t}{{ \mathbf{e}}}^{n+1} \right\|_{2}^2+ {\mathcal O}_{\Gamma }^2 \end{equation*}

    for a constant C_{\mathtt 3}: = {6}C_{\mathtt 2}^2 (p+1)^2 , where {\mathcal O}_{\Gamma } is defined by (3.5).

    Proof. Applying the triangle inequality to (3.6) shows that

    \left|{r}_{j}^{n+1}\right|^2\leq 4 C_{\mathtt 2}^2(p+1)\sum\limits_{j_\ell\in \Lambda_j} \left(\left|e^n_{j_\ell}\right|^2+\left|e^{n-1}_{j_\ell}\right|^2\right)+ 2 {\mathcal O}_{\Gamma }^2.

    By the periodicity of e^{n-k}_j and the definition of discrete \ell_2 -norm, we have

    \begin{aligned} \|{\mathbf {r}}^{n+1}\|_2^2 \leq & 4 C_{\mathtt 2}^2(p+1)^2 \Bigl( \| \mathbf{e}^{n} \|_2^2+ \| \mathbf{e}^{n-1}\|_2^2\Bigr) +2 {\mathcal O}_{\Gamma }^2. \end{aligned}

    Using Young's inequality for the inner product \langle\mathbf{r}^{n+1}, {\delta}_{t} {\mathbf {e}}^{n+1}\rangle completes the proof.

    Next, for the estimation of \langle {{\mathfrak L}} \mathbf{e}^{n-k}({ \mathit{\boldsymbol{\chi}}}^{n-k})- \mathbf{e}^{n-k}, \delta_t \mathbf{e}^{n+1}\rangle ( k = 0, 1 ) in (3.7), let us define {\widetilde \chi}_{j}(t) to be an arbitrary approximation of the characteristic curve { \chi}_{j}(t) over [t_{n-1}, t_{n+1}] satisfying

    \begin{equation} \begin{aligned} &{\widetilde \chi}_{j}(t_{n -k}) = { \chi}_{j}^{n-k},\quad k = -1,0,1 ,\\ &\sup\limits_{ t_{n-k}\leq t\leq t_{n+1}} \left|\frac{d}{dt}{\widetilde \chi}_j(t)\right|\leq M < \infty \quad \forall j, \end{aligned} \end{equation} (3.8)

    where { \chi}_{j}^{n+1} = {x}_{j} and { \chi}_{j}^{n-k}\ ( k = 0, 1 ) are the approximations of the departure points { \chi}_{j}(t_{n-k}) , and M is a fixed constant. Then the property (3.8) and the fundamental theorem of calculus, applied for a given vector \mathbf{e}^{n-k} , imply that

    \begin{equation} \begin{aligned} \left|{{\mathfrak L}} \mathbf{e}^{n-k}( { \chi}^{n-k}_j)-e^{n-k}_{j}\right| = &\left|{{\mathfrak L}}{\mathbf {e}}^{n-k} \left({\widetilde \chi}_{j}(t_{n-k})\right)-{{\mathfrak L}} {\mathbf {e}}^{n-k} \left({\widetilde \chi}_{j}(t_{n+1})\right)\right|\\ \leq& \int_{t_{n-k}}^{t_{n+1}} \Bigl|\frac{d}{dx} {{\mathfrak L}}{ \mathbf{e}}^{n-k} ({\widetilde \chi}_{j}(t) )\Bigr|\Bigl| \frac{d{\widetilde \chi}_{j}}{dt}( t)\Bigr| dt\\ \leq& M \int_{t_{n-k}}^{t_{n+1}} \Bigl|\frac{d}{dx} {{\mathfrak L}}{ \mathbf{e}}^{n-k} \left({\widetilde \chi}_{j}(t)\right)\Bigr|dt. \end{aligned} \end{equation} (3.9)

    Thus, if we assume {\widetilde \chi}_{j}(t) \in {I}_{j} for all t\in [t_{n-1}, t_{n+1}] , then by applying Hölder's inequality and using the results of Lemma A.1 on the right-hand side of (3.9), we obtain

    \begin{equation} \begin{aligned} \int_{t_{n-k}}^{t_{n+1}} \Bigl|\frac{d}{dx} {{\mathfrak L}}{ \mathbf{e}}^{n-k} \left({\widetilde \chi}_{j}(t)\right)\Bigr|dt \leq d_1 (k+1) h \left(\sum\limits_{ j_\ell \in\Lambda'_j} \left|\delta_{x}e^{n-k}_{j_\ell}\right|^2 \right)^{\frac{1}{2}}, \end{aligned} \end{equation} (3.10)

    where d_1 is a constant defined in Lemma A.1. Using this bound, we estimate the middle two terms of the right-hand side in (3.7) in the following lemma.

    Lemma 3.3. Suppose that the approximation {\widetilde \chi}_{j}(t) is included in the subinterval I_{j} for 1\leq j \leq {J} and the derivatives \frac{d {\widetilde \chi}_{j} }{dt}(t) satisfy the conditions of (3.8). Then there exist constants C_{\mathtt 4}^k > 0, \; k = 0, 1 , depending on p and M , such that for the vectors {\delta}_{x} \mathbf{e}^{n-k}: = \left[\delta_{x} e_{1}^{n-k}, \delta_{x} e_{2}^{n-k}, \dots, \delta_{x} e_{J}^{n-k} \right]^{\mathtt T}\; (k = 0, 1)

    \begin{equation} \left|\frac{b_k}{3h}\langle {{\mathfrak L}}{{ \mathbf{e}}}^{ n-k}( \mathit{\boldsymbol{\chi}}^{n-k})-{ \mathbf{e}}^{ n-k},{\delta}_{t}{{ \mathbf{e}}}^{n+1} \rangle \right| \leq \frac{3}{2}C_{\mathtt 4}^k \left\| {\delta}_{x} {\mathbf {e}}^{n-k}\right\|^2_{2}+\frac{1}{6} \left\| {\delta}_{t}{{ \mathbf{e}}}^{n+1} \right\|_{2}^2,\quad b_0 = {4}, b_1 = {1}. \end{equation} (3.11)

    Proof. By the definition of the discrete \ell_2 -norm and the periodicity of \delta_{x}e^{n-k}_{j} , the inequality (3.10) shows

    \begin{equation*} \| {{\mathfrak L}}{\mathbf {e}}^{n-k}( \mathit{\boldsymbol{\chi}}^{n-k})- \mathbf{e}^{n-k}\|^2_{2} \leq pd_1^2M^2(k+1)^2h^2 \| {\delta}_{x} {\mathbf {e}}^{n-k}\|^2_{2},\quad k = 0, 1. \end{equation*}

    Thus, applying Young's inequality and choosing C_{\mathtt 4}^k: = \frac{1}{9}pd_1^2M^2b_k^2 (k+1)^2 complete the proof.

    We now estimate the term \langle \mathcal A{ \mathbf{e}}^{n+1}, {\delta}_{t}{ \mathbf{e}}^{n+1} \rangle in (3.7). For this, let {\delta}^2_x be the second-order central difference operator given by

    {\delta}^2_x {e}^{n}_j: = \frac{{e}^{n}_{j-1}-2{e}^{n}_j+{e}^{n}_{j+1}}{\triangle x^2}.

    Then the finite difference operator \delta_{x}^4 { \mathbf{e}}^{n+1}_{j} defined by (2.21) can be written in terms of the operators \delta_x and \delta^2_x as follows

    \begin{equation} \delta_{x}^4 { \mathbf{e}}^{n+1}_{j} = \frac{1}{12}\delta_x{\mathcal E}^{n+1}_{j+1},\quad {\mathcal E}^{n+1}_j: = -\triangle x\left(\delta_{x}^2 {e}^{n+1}_{j} - \delta_{x}^2 {e}^{n+1}_{j-1}\right)+12\delta_{x} {e}^{n+1}_{j}. \end{equation} (3.12)
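    The algebraic identity (3.12) can be checked directly on an arbitrary periodic grid vector; the short NumPy verification below (illustrative only) confirms that \frac{1}{12}\delta_x{\mathcal E}^{n+1}_{j+1} reproduces the fourth-order stencil \delta_x^4 e^{n+1}_j .

```python
import numpy as np

rng = np.random.default_rng(0)
J, dx = 32, 0.1
e = rng.standard_normal(J)                      # arbitrary periodic grid vector

d1 = (e - np.roll(e, 1)) / dx                                # delta_x e_j
d2 = (np.roll(e, 1) - 2 * e + np.roll(e, -1)) / dx**2        # delta_x^2 e_j
E = -dx * (d2 - np.roll(d2, 1)) + 12 * d1                    # E_j as in (3.12)

lhs = (-np.roll(e, 2) + 16 * np.roll(e, 1) - 30 * e
       + 16 * np.roll(e, -1) - np.roll(e, -2)) / (12 * dx**2)  # delta_x^4 e_j
rhs = (np.roll(E, -1) - E) / (12 * dx)                         # (1/12) delta_x E_{j+1}

assert np.allclose(lhs, rhs)                    # identity (3.12) holds exactly
```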

    Using the expression in (3.12), we have the following result.

    Lemma 3.4. For n\geq0 , the symmetric circulant matrix {\mathcal A} defined in (2.24) satisfies

    \begin{equation*} \langle {\mathcal A} {{ \mathbf{e}}}^{n+1}, {\delta}_{t}{\mathbf {e}}^{n+1} \rangle \leq \frac{1}{2h}\left(\left\|{\delta}_x {{ \mathbf{e}}}^{n}\right\|^2_2-\left\|{\delta}_x {{ \mathbf{e}}}^{n+1}\right\|^2_2 \right)+ \frac{\triangle x^2}{24h}\left(\left\|{\delta}_x^2 {{ \mathbf{e}}}^{n}\right\|^2_2-\left\|{\delta}_x^2 {{ \mathbf{e}}}^{n+1}\right\|^2_2\right). \end{equation*}

    Proof. From discrete summation by parts with the periodicity of \mathbf{e}^{n+1} , the term \langle \mathcal A{ \mathbf{e}}^{n+1}, {\delta}_{t}{ \mathbf{e}}^{n+1} \rangle can be expressed as

    \begin{equation} \begin{aligned} \langle {\mathcal A} {\mathbf {e}}^{n+1}, {\delta}_{t}{\mathbf {e}}^{n+1} \rangle & = \triangle x\sum\limits_{1\leq j \leq J} \left({\delta}_{x}^4 {e}^{n+1}_{j}\right)\left( {\delta}_t {e}^{n+1}_{j}\right)\\ & = \frac{\triangle x}{12}\sum\limits_{1\leq j \leq J}\left(\delta_x {\mathcal E}^{n+1}_{j+1}\right)\left({\delta}_t {e}^{n+1}_{j}\right)\\ & = -\frac{\triangle x}{12 } \sum\limits_{1\leq j \leq J} {\mathcal E}^{n+1}_{j+1} \left(\delta_x \delta_t {e}^{n+1}_{j+1}\right)+\frac{1}{12}\left({\mathcal E}^{n+1}_{J+1}\delta_te^{n+1}_{J+1}-{\mathcal E}^{n+1}_1\delta_t e^{n+1}_1\right)\\ & = -\frac{\triangle x}{12 } \sum\limits_{1\leq j \leq J} {\mathcal E}^{n+1}_{j+1} \left( \delta_x \delta_t {e}^{n+1}_{j+1}\right). \end{aligned} \end{equation} (3.13)

    Also, from the definition of {\mathcal E}^{n+1}_j and the property {\delta}_{x}{\delta}_t {{ \mathbf{e}}}^{n+1} = {\delta}_t {\delta}_{x}{{ \mathbf{e}}}^{n+1} , the term \langle {\mathcal A} {\mathbf {e}}^{n+1}, {\delta}_{t}{\mathbf {e}}^{n+1} \rangle in (3.13) can be split into two terms as follows:

    \begin{equation} \langle {\mathcal A} { \mathbf{e}}^{n+1}, {\delta}_{t}{ \mathbf{e}}^{n+1} \rangle : = {\mathcal A_1}+{\mathcal A_2}, \end{equation} (3.14)
    {\mathcal A_1} : = \frac{\triangle x^2}{12}\sum\limits_{1\leq j \leq J} \left(\delta_{x}^2{e}^{n+1}_{j+1} - \delta_{x}^2 {e}^{n+1}_{j}\right) \left( \delta_t \delta_x {e}^{n+1}_{j+1}\right), \quad {\mathcal A_2}: = -\triangle x \sum\limits_{1\leq j \leq J} \left(\delta_{x}{e}^{n+1}_{j+1}\right) \left( \delta_t \delta_x {e}^{n+1}_{j+1}\right).

    Using the periodicity of \mathbf{e}^{n+1} and discrete summation by parts again, the term {\mathcal A_1} can be estimated as follows:

    \begin{equation} \begin{aligned} {\mathcal A_1}& = -\frac{\triangle x^2}{12}\Bigl[ \sum\limits_{1\leq j \leq J} \left(\delta_{x}^2 {e}^{n+1}_{j+1} \right) \left(\delta_t\delta_{x} {e}^{n+1}_{j+2}-\delta_t\delta_{x} {e}^{n+1}_{j+1} \right) - \left( \delta_{x}^2 {e}^{n+1}_{J+1}\right) \left( \delta_t\delta_{x}{e}^{n+1}_{J+1}\right) + \left( \delta_{x}^2 {e}^{n+1}_{1}\right)\left( \delta_t\delta_{x} {e}^{n+1}_{1}\right)\Bigr]\\ & = -\frac{\triangle x^3}{12h }\sum\limits_{1\leq j \leq J} \delta_{x}^2 {e}^{n+1}_{j+1} (\delta_x^2 e^{n+1}_{j+1}-\delta_x^2 e^{n}_{j+1})\\ & = - \frac{\triangle x^2}{12h }\| \delta_{x}^2{ \mathbf{e}}^{n+1} \|_2^2+ \frac{\triangle x ^2}{12h } \langle \delta_{x}^2{ \mathbf{e}}^{n+1} , \delta_{x}^2{ \mathbf{e}}^{n} \rangle. \end{aligned} \end{equation} (3.15)

    Similarly, from the definition of the operator \delta_t , the term {\mathcal A_2} can be estimated as

    \begin{equation} \begin{aligned} {\mathcal A_2}& = -\frac{\triangle x}{h} \sum\limits_{1\leq j \leq J}\delta_{x}{e}^{n+1}_{j+1} \left( \delta_x {e}^{n+1}_{j+1} - \delta_x {e}^{n}_{j+1}\right) \\ & = -\frac{1}{h}\| \delta_{x}{ \mathbf{e}}^{n+1}\|^2_2+\frac{1}{h} \langle \delta_{x}{ \mathbf{e}}^{n+1} , \delta_x { \mathbf{e}}^{n}\rangle. \end{aligned} \end{equation} (3.16)

    Finally, combining (3.15) and (3.16), the term \langle {\mathcal A} {{ \mathbf{e}}}^{n+1}, {\delta}_{t}{\mathbf {e}}^{n+1} \rangle can be bounded as follows:

    \begin{equation} \begin{aligned} h\langle {\mathcal A} {{ \mathbf{e}}}^{n+1}, {\delta}_{t}{\mathbf {e}}^{n+1} \rangle & = -\left(\left \| {\delta}_{x} {{ \mathbf{e}}}^{n+1}\right\|^2_{2}-\langle {\delta}_{x} {{ \mathbf{e}}}^{n+1} ,\delta_{x} {{ \mathbf{e}}}^{n}\rangle \right) - \frac{\triangle x^2}{12}\left(\left\| {\delta}_{x} ^2{{ \mathbf{e}}}^{n+1}\right\|^2_{2} -\langle {\delta}_{x} ^2 {{ \mathbf{e}}}^{n+1} , {\delta}_{x}^2 {{ \mathbf{e}}}^{n}\rangle\right)\\ &\leq \frac{1}{2}\left(\left\|{\delta}_x {{ \mathbf{e}}}^{n}\right\|^2_2-\left\|{\delta}_x {{ \mathbf{e}}}^{n+1}\right\|^2_2 \right)+ \frac{\triangle x^2}{24}\left(\left\|{\delta}_x^2 {{ \mathbf{e}}}^{n}\right\|^2_2-\left\|{\delta}_x^2 {{ \mathbf{e}}}^{n+1}\right\|^2_2\right) \end{aligned} \end{equation} (3.17)

    with the help of Young's inequality. This completes the proof.

    Remark 3.5. The negative of {\mathcal A} is positive-definite, which allows the Cholesky decomposition to split the errors as shown in the first equation of (3.17). This is the discrete version of the usual integration by parts for the Laplacian with the periodic boundary condition.

    Combining the results of Lemmas 3.2–3.4, we can establish our main theorem, namely the following convergence theorem for the proposed scheme.

    Theorem 3.6. Suppose that {\delta}_t {{ \mathbf{e}}}^{1} , {\delta}_x{{ \mathbf{e}}}^{1} , and {\delta}_x^2 {{ \mathbf{e}}}^{1} are bounded and the hypotheses in Lemmas 3.2 and 3.3 are satisfied. Then there exists a constant C_{\mathtt{5}^*} such that

    \begin{equation*} \begin{aligned} & \| \mathbf{e}^{N}\|_2 \leq C_{\mathtt{5}^*} \left( \sqrt{h}\left\| \delta_t{{ \mathbf{e}}}^{1 } \right\|_{2} + \sqrt{\nu} \left\|{\delta}_x {{ \mathbf{e}}}^{1}\right\|_2+ \sqrt{\nu} {\triangle x} \left\|{\delta}_x^2 {{ \mathbf{e}}}^{1}\right\|_2+ {h^2}+ \triangle x^4+\frac{\triangle x^{p+1}}{h} \right), \end{aligned} \end{equation*}

    where C_{\mathtt{5}^*} depends only on C_{\mathtt {3}}, {C_{\mathtt 4}^k}, C_{A_1}, C_{A_2}, p, \nu, M, and T .

    Proof. We apply Young's inequality on the first term of (3.7), i.e., \left|\langle {\delta_{t}}{ \mathbf{e}}^{n}, {\delta}_{t}{{ \mathbf{e}}}^{n+1} \rangle \right| \leq \frac{1}{2}\left\|{\delta}_{t}{{ \mathbf{e}}}^{n}\right\|_{2}^2+\frac{1}{2}\left\|{\delta}_{t}{{ \mathbf{e}}}^{n+1} \right\|_{2}^2 and combine the results of Lemmas 3.2–3.4 into (3.7) to obtain

    \begin{equation} \begin{aligned} \frac{1}{3} \left\| {\delta}_t {{ \mathbf{e}}}^{n+1 } \right\|^2_{2}+ \frac{\nu}{3 h} \left(\left\|{\delta}_x {{ \mathbf{e}}}^{n+1}\right\|^2_2+ \frac{\triangle x^2}{12}\left\|{\delta}_x^2 {{ \mathbf{e}}}^{n+1}\right\|^2_2\right) \leq & \frac{1}{6} \left\| {\delta}_t {{ \mathbf{e}}}^{n } \right\|^2_{2}+\frac{ \nu}{3h} \left(\left\|{\delta}_x {{ \mathbf{e}}}^{n}\right\|^2_2+ \frac{\triangle x^2}{12}\left\|{\delta}_x^2 {{ \mathbf{e}}}^{n}\right\|^2_2\right) \\ & + C_{\mathtt 5} \sum\limits_{k = 0,1} \left\|{\delta}_x {{ \mathbf{e}}}^{n-k}\right\|^2_2 + {{{\mathcal O}^2 _\Gamma }}, \end{aligned} \end{equation} (3.18)

    where C_{\mathtt 5}: = \max \left\{C_{\mathtt 3}, \frac{3}{2}C^k_{\mathtt 4}, k = 0, 1 \right\} . By adding \frac{1}{6}\left\|{\delta}_t {{ \mathbf{e}}}^{1} \right\|^2_{2} to both sides after summing (3.18) from n = 1 to N-1 , under the induction hypothesis ( A_1 ) on the term \|\delta_x \mathbf{e}^{m}\|_{2} for m < N , we have

    \begin{equation} \begin{aligned} \frac{1}{6}\sum\limits_{n = 1}^{N} \left\| {\delta}_t {{ \mathbf{e}}}^{n } \right\|^2_{2} + \frac{\nu}{3h}\left(\left\|{\delta}_x {{ \mathbf{e}}}^{N}\right\|^2_2+ \frac{\triangle x^2}{12}\left\|{\delta}_x^2 {{ \mathbf{e}}}^{N}\right\|^2_2\right) \leq& \frac{1}{3}\left\| {\delta}_t {{ \mathbf{e}}}^{1 } \right\|^2_{2}+\frac{ \nu}{3h}\left(\left\|{\delta}_x {{ \mathbf{e}}}^{1}\right\|^2_2+ \frac{\triangle x^2}{12}\left\|{\delta}_x^2 {{ \mathbf{e}}}^{1}\right\|^2_2\right)\\ & + 2C_{\mathtt 5}\sum\limits_{n = 1}^{N-1} \left\|{\delta}_x {{ \mathbf{e}}}^{n}\right\|^2_2 + N{\mathcal O}_{\Gamma}^2, \end{aligned} \end{equation} (3.19)

    in which the asymptotic order h^2 from the induction hypothesis is incorporated in {\mathcal O}_{\Gamma}^2 . Since \frac{1}{hT}\left\| {{ \mathbf{e}}}^{N} \right\|^2_{2} = \frac{1}{h^2N}\left\| {{ \mathbf{e}}}^{N}- {{ \mathbf{e}}}^{0} \right\|^2_{2} = \frac{1}{N}\left\|\sum\limits_{n = 1}^{N} {\delta}_t {{ \mathbf{e}}}^{n } \right\|^2_{2} \leq \sum\limits_{n = 1}^{N} \left\| {\delta}_t {{ \mathbf{e}}}^{n } \right\|^2_{2} by the telescoping summation, multiplying (3.19) by 6hT yields

    \begin{equation} \left\| \mathbf{e}^N \right\|^2_{2}+2{\nu}T\left( \left\|{\delta}_x {{ \mathbf{e}}}^{N}\right\|^2_2+ \frac{\triangle x^2}{12}\left\|{\delta}_x^2 {{ \mathbf{e}}}^{N}\right\|^2_2\right) \leq 2T \left(h\left\| {\delta}_t {{ \mathbf{e}}}^{1 } \right\|^2_{2}+ \nu \left\|{\delta}_x {{ \mathbf{e}}}^{1}\right\|^2_2+\nu \frac{\triangle x^2}{12}\left\|{\delta}_x^2 {{ \mathbf{e}}}^{1}\right\|^2_2+3T{\mathcal O}_{\Gamma}^2 \right), \end{equation} (3.20)

    which yields the desired estimate of \| \mathbf{e}^{N}\|_{2} with the generic constant C_{\mathtt 5^*} from T and {\mathcal O}_{\Gamma} .

    Remark 3.7. (1) In (3.19), the estimates of \|\delta_x \mathbf{e}^{m}\|_2 for m < N from the induction hypothesis are applied to obtain the bound of \| \mathbf{e}^{N}\|_2 . Equations (3.19) and (3.20) show that the bound of \|\delta_x \mathbf{e}^{N}\|_2 increases when either T \rightarrow \infty or \nu\rightarrow 0 , and so does that of \| \mathbf{e}^{N}\|_2 , analogous to the results shown in [1,5,22].

    (2) Note that for the Dirichlet problem [22, Lemma 2], the error satisfies e^N(x_j) = 0 for all x_j\in \partial I = \{a, b\} , so that \| \mathbf{e}^N\|_2\leq C\|\delta_x \mathbf{e}^{N}\|_2 with the constant C = \frac{1}{2}(b-a)^2 , which is the discrete Poincaré inequality. However, the analysis of the model problem with the periodic boundary condition can no longer rely on this inequality, as mentioned in the introduction. To overcome this structural drawback, we have instead investigated the error \delta_t \mathbf{e}^n using the temporal finite difference operator at each time step. By applying the telescoping summation of \delta_t \mathbf{e}^n for n\leq N , the error \mathbf{e}^{N} at the final step can be estimated without using the bound of \|\delta_x \mathbf{e}^{N}\|_2 . This approach gives a more general convergence analysis of BSLMs for solving advection-type equations, without restrictions on the boundary conditions, compared with the strategy in [22].

    (3) The error expansion presented here is similar to the results for linear advection problems [17] and for nonlinear problems [22] with the Dirichlet boundary condition. In particular, the non-monotonic dependence on the temporal step size, \mathcal{O}\left({\triangle x^{p+1}}/{h}\right) , is a well-known result and indicates that the superior accuracy of semi-Lagrangian schemes with smaller temporal step sizes may not always be achieved. As expected, the other two terms in \mathcal{O}(h^2+\triangle x^4) are derived from the global truncation errors of the ECM strategy equipped with BDF2 for the Cauchy problem and of the fourth-order central finite difference for the diffusion term. Therefore, the precise convergence error of the BSLM depends on the backward integration scheme used to solve the Cauchy problem as well as on the interpolation formula used to estimate the solution at the departure points.

    (4) The above convergence analysis can be naturally extended to higher-dimensional problems. For example, the truncation errors of the characteristic curves at the departure points in the two-dimensional problem satisfy results consistent with Lemma 2.1 and Corollaries 2.2 and 2.3 (see [22] for details). With the help of the induction hypotheses, one can show that the error equation corresponding to (3.4) satisfies the same asymptotic behavior as in (3.5). Furthermore, in a higher-dimensional problem the positive definite matrix, the negative of {\mathcal A} for the diffusion term (2.24), can be expressed as a sum of tensor products of the identity matrix and a positive definite matrix, so that the result of Lemma 3.4 can be easily extended. The rest of the process for obtaining the error at the final step is achieved through the temporal evolution, as in Theorem 3.6, in a similar way. Due to space limitations, we only provide numerical results (see Example 4.2 in Section 4).

    In this subsection, we analyze the numerical stability of the proposed semi-Lagrangian scheme by showing the boundedness of the numerical solution of the model problem (1.1) with respect to the discrete {\ell}_2 -norm, without any restriction on the temporal step size h or the spatial step size \triangle x . As mentioned in the introduction, the stability analysis of the Dirichlet problem can be obtained by using the discrete Poincaré inequality from the boundary condition, which is not available for the periodic problem. For the stability of the BSLM with the periodic boundary condition, we instead begin by applying the discrete Grönwall lemma to obtain the bound of \delta_x {{ \mathbf{U}}}^{m} for each m < N and then obtain the bound of {{ \mathbf{U}}}^{N} by summing \left\| \delta_x {{ \mathbf{U}}}^{m} \right\|_{2} from m = 0 to N-1.

    Theorem 3.8. Assume that the trajectories of the characteristic curves satisfy that \chi^{n-k}_j\in I_j , (k = 0, 1) and the condition (3.8) for all j = 1, \cdots, J . If {{ \mathbf{U}}}^{0} is bounded, so that {\delta}_x {{ \mathbf{U}}}^{0} and {\delta}^2_x {{ \mathbf{U}}}^{0} are also bounded, then the discrete solution \mathbf{U}^N of the proposed BSLM is unconditionally stable in the following sense:

    \| \mathbf{U}^N\|_2 \leq C_{s} \left(\left\|{{ \mathbf{U}}}^{0}\right\|_2+ \left\|{\delta}_x {{ \mathbf{U}}}^{0}\right\|_2+ \triangle x \left\|{\delta}^2_x {{ \mathbf{U}}}^{0}\right\|_2 \right),\quad N\geq2,

    where C_{s} depends only on \nu , p , M and T and is independent of h and \triangle x .

    Proof. We multiply the numerical system (2.25) by {2}/{3} and add -{ \mathbf{U}}^n/h to obtain that

    \begin{equation} \delta_t { \mathbf{U}}^{n+1}- \frac{2\nu}{3} {\mathcal A} { \mathbf{U}}^{n+1} = \frac{1}{3}\delta_t { \mathbf{U}}^{n} - \frac{4 }{3h} \left( { \mathbf{U}}^{n} - {\mathfrak L}{ \mathbf{U}}^{n}( \mathit{\boldsymbol{\chi}}^{n} ) \right) + \frac{1}{3h} \left({ \mathbf{U}}^{n-1}- {\mathfrak L} \mathbf{U}^{n-1}( \mathit{\boldsymbol{\chi}}^{n-1} ) \right). \end{equation} (3.21)

    Applying the inner product with \delta_t{ \mathbf{U}}^{n+1} in (3.21) yields

    \begin{equation} \begin{aligned} \| \delta_t{ \mathbf{U}}^{n+1} \|^2_2-\frac{2\nu}{3} \langle {\mathcal A} { \mathbf{U}}^{n+1} , \delta_t { \mathbf{U}}^{n+1} \rangle = & \frac{1}{3} \langle \delta_t { \mathbf{U}}^{n } , \delta_t{ \mathbf{U}}^{n+1}\rangle- \frac{4 }{3h} \langle { \mathbf{U}}^{n} - {\mathfrak L} \mathbf{U}^{n}( \mathit{\boldsymbol{\chi}}^{n} ), \delta_t { \mathbf{U}}^{n+1} \rangle \\ &+ \frac{1 }{3h} \langle { \mathbf{U}}^{n-1} - {\mathfrak L} \mathbf{U}^{n-1}( \mathit{\boldsymbol{\chi}}^{n-1} ) , \delta_t { \mathbf{U}}^{n+1} \rangle. \end{aligned} \end{equation} (3.22)

    Applying the same techniques as in Lemma 3.3 and (3.17), with \mathbf{U}^n in place of \mathbf{e}^n , we obtain

    \begin{equation} \begin{aligned} \left|\frac{b_k}{3h}\langle {{\mathfrak L}}{{ \mathbf{U}}}^{ n-k}( \mathit{\boldsymbol{\chi}}^{n+1,n-k})-{ \mathbf{U}}^{ n-k},{\delta}_{t}{{ \mathbf{U}}}^{n+1} \rangle \right| \leq \frac{3}{2} C_{\mathtt 4}^k \| {\delta}_{x} { \mathbf{U}}^{n-k}\|^2_{2}+\frac{1}{6} \left\| {\delta}_{t}{{ \mathbf{U}}}^{n+1} \right\|_{2}^2, \quad b_0 = {4}, b_1 = {1} \end{aligned} \end{equation} (3.23)

    and

    \begin{equation} \begin{aligned} h\langle {\mathcal A} {{ \mathbf{U}}}^{n+1}, {\delta}_{t}{ \mathbf{U}}^{n+1} \rangle & = -\left(\left \| {\delta}_{x} {{ \mathbf{U}}}^{n+1}\right\|^2_{2}-\langle {\delta}_{x} {{ \mathbf{U}}}^{n+1} ,\delta_{x} {{ \mathbf{U}}}^{n}\rangle \right) - \frac{\triangle x^2}{12}\left(\left\| {\delta}_{x} ^2{{ \mathbf{U}}}^{n+1}\right\|^2_{2} -\langle {\delta}_{x} ^2 {{ \mathbf{U}}}^{n+1} , {\delta}_{x}^2 {{ \mathbf{U}}}^{n}\rangle\right)\\ &\leq \frac{1}{2} \left\|{\delta}_x {{ \mathbf{U}}}^{n}\right\|^2_{2^*}-\frac{1}{2}\left\|{\delta}_x {{ \mathbf{U}}}^{n+1}\right\|^2_{2^*}, \end{aligned} \end{equation} (3.24)

    where \left\|{\delta}_x {{ \mathbf{U}}}^{m}\right\|^2_{2^*}: = \left\|{\delta}_x {{ \mathbf{U}}}^{m}\right\|^2_2+ \frac{\triangle x^2}{12}\left\|{\delta}_x^2 {{ \mathbf{U}}}^{m}\right\|^2_2 and C_{\mathtt 4}^k (k = 0, 1) are the constants in Lemma 3.3. Applying Young's inequality to the first term on the right-hand side of (3.22) gives

    \begin{equation} \begin{aligned} & \langle \delta_t { \mathbf{U}}^{n },\delta_t { \mathbf{U}}^{n+1}\rangle \leq \frac{1}{2} \| \delta_t{ \mathbf{U}}^{n }\|^2_2+\frac{1}{2}\|\delta_t { \mathbf{U}}^{n+1}\|^2_2, \end{aligned} \end{equation} (3.25)

    so that, using this together with the bounds (3.23) and (3.24), one can rearrange Eq (3.22) to obtain

    \begin{equation} \frac{1}{2} \left\| {\delta}_t {{ \mathbf{U}}}^{n+1 } \right\|^2_{2} + \frac{\nu}{3 h} \left\|{\delta}_x {{ \mathbf{U}}}^{n+1}\right\|^2_{2^*} \leq \frac{1}{6} \left\| {\delta}_t {{ \mathbf{U}}}^{n } \right\|^2_{2}+\frac{ \nu}{3h} \left\|{\delta}_x {{ \mathbf{U}}}^{n}\right\|^2_{2^*} + C_{{\mathtt{ 4}}}\sum\limits_{k = 0,1}\left\|{\delta}_x {{ \mathbf{U}}}^{n-k}\right\|^2_2, \end{equation} (3.26)

    where {C_{\mathtt{4}}: = \frac{3}{2} \max \left\{C^0_{\mathtt 4}, C^1_{\mathtt 4}\right\}. } Summing on both sides of (3.26) from n = 1 to N-1 yields

    \begin{equation} \begin{aligned} \frac{1}{3}\sum\limits_{n = 1}^{N} \left\| {\delta}_t {{ \mathbf{U}}}^{n } \right\|^2_{2} + \frac{\nu}{3h}\left\|{\delta}_x {{ \mathbf{U}}}^{N}\right\|^2_{2^*} \leq & \frac{ \nu}{3h} \left\|{\delta}_x {{ \mathbf{U}}}^{0}\right\|^2_{2^*} +2 C_{\mathtt{4}}\sum\limits_{n = 0}^{N-1} \left\|{\delta}_x {{ \mathbf{U}}}^{n}\right\|^2_2. \end{aligned} \end{equation} (3.27)

    Note that \left\| {{ \mathbf{U}}}^{N} \right\|_{2}\leq \left\| {{ \mathbf{U}}}^{0} \right\|_{2}+h\sum\limits_{n = 1}^{N} \left\| {\delta}_t {{ \mathbf{U}}}^{n } \right\|_{2} , so that, by the Cauchy–Schwarz inequality, \left\| {{ \mathbf{U}}}^{N} \right\|^2_{2}\leq 2\left\|{{ \mathbf{U}}}^{0} \right\|^2_{2}+2Nh^2\sum_{n = 1}^{N} \left\| {\delta}_t {{ \mathbf{U}}}^{n } \right\|^2_{2} . Multiplying (3.27) by 6hT and applying this bound together with Nh \leq T , we obtain

    \begin{equation} \begin{aligned} \left\|{{ \mathbf{U}}}^{N }\right\|^2_{2}+2\nu T \left\|{\delta}_x {{ \mathbf{U}}}^{N}\right\|^2_{2^*} \leq & 2\left\|{{ \mathbf{U}}}^{0 } \right\|^2_{2} + 2{\nu}T \left\|{\delta}_x {{ \mathbf{U}}}^{0}\right\|^2_{2^*} +12 C_{\mathtt{4}} hT \sum\limits_{n = 0}^{N-1} \left\|{\delta}_x {{ \mathbf{U}}}^{n}\right\|^2_2. \end{aligned} \end{equation} (3.28)

    Dividing (3.28) by 2\nu T and using \left\|{\delta}_x {{ \mathbf{U}}}^{N}\right\|^2_{2}\leq\left\|{\delta}_x {{ \mathbf{U}}}^{N}\right\|^2_{2^*} yields

    \begin{equation*} \begin{aligned} \left\| {\delta}_x {{ \mathbf{U}}}^{N } \right\|^2_{2}&\leq \frac{1}{\nu T}\left\|{{ \mathbf{U}}}^{0 } \right\|^2_{2} + \left\|{\delta}_x {{ \mathbf{U}}}^{0}\right\|^2_{2^*} + 6 C_{\mathtt{4}} \frac{h}{\nu} \sum\limits_{n = 0}^{N-1} \left\|{\delta}_x {{ \mathbf{U}}}^{n}\right\|^2_2. \end{aligned} \end{equation*}

    Hence, from the discrete Grönwall's lemma we conclude that

    \begin{equation*} \left\| \delta_x {{ \mathbf{U}}}^{N } \right\|^2_{2} \leq e^{{6C_{\mathtt{4}}{h}N/{\nu}}} \left( \frac{1}{\nu T}\left\|{{ \mathbf{U}}}^{0 } \right\|^2_{2}+\left\|{\delta}_x {{ \mathbf{U}}}^{0}\right\|^2_{2^*} \right) , \end{equation*}

    which implies

    \begin{equation} \begin{aligned} \sum\limits_{n = 0}^{N-1} \left\|{\delta}_x {{ \mathbf{U}}}^{n}\right\|^2_2 \leq C_{\mathtt{4}^*} \left( \frac{1}{\nu T}\left\|{{ \mathbf{U}}}^{0 } \right\|^2_{2}+\left\|{\delta}_x {{ \mathbf{U}}}^{0}\right\|^2_{2^*} \right), \end{aligned} \end{equation} (3.29)

    where C_{\mathtt{4}^*} satisfies

    C_{\mathtt{4}^*}: = \sum\limits_{n = 0}^{N-1} \exp\left({\frac{6C_{\mathtt{4}}{h}n}{\nu}}\right)\leq N\exp\left({\frac{6C_{\mathtt{4}}T}{\nu}}\right) .

    Again, applying (3.29) to (3.28), we have

    \begin{equation*} \begin{aligned} \left\|{{ \mathbf{U}}}^{N }\right\|^2_{2} \leq& 2\left\|{{ \mathbf{U}}}^{0 } \right\|^2_{2}+ 2{\nu}T \left\|{\delta}_x {{ \mathbf{U}}}^{0}\right\|^2_{2^*}+12 C_{\mathtt{4}}T^2 e^{\frac{6C_{\mathtt{4}}T}{\nu}} \left(\frac{1}{\nu T}\left\|{{ \mathbf{U}}}^{0 } \right\|^2_{2}+\left\|{\delta}_x {{ \mathbf{U}}}^{0}\right\|^2_{2^*}\right). \end{aligned} \end{equation*}

    We finally complete the proof by choosing C_{s}^2 = \max\left\{ 2+\frac{12}{\nu T} C_{\mathtt{4}}T^2 e^{\frac{6C_{\mathtt{4}}T}{\nu}}, 2\nu T+{12} C_{\mathtt{4}}T^2 e^{\frac{6C_{\mathtt{4}}T}{\nu}} \right\} .

    Remark 3.9. (1) The stability analysis yields the generic constant C_{s} , which is independent of \triangle x and h , using neither an induction hypothesis nor mesh restrictions. Therefore, the result shows that the numerical algorithm is unconditionally stable, a property that is often only asserted numerically. We present numerical experiments to support this stability analysis.

    (2) This proof offers a more general approach to the stability of the BSLM than the previous presentation in [22], so that the obtained results also apply to the BSLM for Dirichlet problems.

    This section provides several experiments with two simple problems to support the theoretical convergence result and the numerical stability established in Theorems 3.6 and 3.8, respectively. To verify the order of convergence of the proposed scheme, we examine three quantities for varying viscosity \nu by varying the spatial mesh size and the temporal step size: the temporal convergence order, the spatial convergence order, and the reciprocal term. For the second-order temporal convergence, we perform numerical tests with different sizes of h for a fixed, sufficiently small \triangle x , so that the error is not much affected by the terms \triangle x^4 and {\triangle x^{p+1}}/{h} , and calculate the rate of temporal convergence (RTC) via the formula

    \log_2 \Bigl( \|\mathbf{e}^N(h)\|_2/\|\mathbf{e}^N(h/2)\|_2 \Bigr).

    Here, \|\mathbf{e}^N(h)\|_2 denotes the discrete \ell_2 -norm error corresponding to the temporal step size h at the end time t_N . To investigate the order of spatial convergence coming from the two terms \triangle x^4 and \triangle x^{p+1}/h , we restrict the relation between the two step sizes to h = \triangle x^2 and use the quintic Lagrange interpolation, p = 5 , so that

    \begin{equation} \mathcal{O}\left(h^2 +\triangle x^4 + \frac{\triangle x^6}{h}\right) = \mathcal{O}(\triangle x^4). \end{equation} (4.1)

    Reducing the spatial step size by half, with the temporal step size adjusted according to this relation, we compute the rate of spatial convergence (RSC) via the formula

    \log_2\Bigl( \|\mathbf{e}^N(\triangle x)\|_2/\|\mathbf{e}^N(\triangle x/2)\|_2 \Bigr),

    where \mathbf{e}^N(\triangle x) represents the error corresponding to the spatial grid size \triangle x at the end time t_N . Finally, to investigate the non-monotonic property of the term \triangle x^{p+1}/h , we apply Lagrangian interpolation with three different orders p = 1, 2, 3 . For each interpolation order, we examine the error and its ratio by halving the temporal step size for a fixed spatial step size, in order to determine the behavior of the reciprocal term involving the temporal step size h predicted by the analysis.
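    The observed rates RTC and RSC defined above can be computed directly from a sequence of measured errors. The following Python sketch is illustrative only; the error values are placeholders, not data from the experiments below.

```python
import numpy as np

def observed_rates(errors):
    """Observed convergence order log2(e_k / e_{k+1}) for errors measured while
    successively halving the step size (h for RTC, dx for RSC)."""
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

# Placeholder error values for illustration only.
errs = [2.5e-3, 6.4e-4, 1.6e-4, 4.1e-5]
print(observed_rates(errs))   # values near 2 would indicate second-order convergence
```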

    Example 4.1. As the first experiment, we consider the Burgers' equation of the form

    \begin{equation*} u_{t}+u u_{x} = \nu u_{xx},\quad x\in (-1, 1),\quad t\in(0,1] \end{equation*}

    with the initial condition given by

    u^0(x) = -\sin( \pi x).

    With the help of a Cole–Hopf transform, the periodic analytic solution [39] is given by

    u(x,t) = -\frac{ \int\limits_{ -\infty}^{\infty} \sin\left( \pi(x-\sqrt{4\nu t}s)\right) \exp \left( \frac{-\cos ( \pi (x-\sqrt{4\nu t}s)) }{2 \nu \pi} \right) \exp(-s^2) ds } { \int\limits_{ -\infty}^{\infty} \exp\left( \frac{-\cos ( \pi (x-\sqrt{4\nu t}s))}{2\nu \pi }\right) \exp(-s^2)ds}.

    We estimate the analytic solution by evaluating the integrals numerically with Gauss–Hermite quadrature using 200 nodes. For the first test, we examine the errors and their ratios while varying the temporal step size h = 1/2^k, k = 6, 7, \cdots, 10 with a fixed small spatial step size \triangle x = 1/2^{11} \leq h for the different viscosity coefficients \nu = 0.01, 0.1, 1.0 . It is easily seen that h^2 is the dominant term of the error, since h^2 > \triangle x^4/h > \triangle x^4 for \triangle x < h \ll 1 , so the error \| \mathbf{e}^N\|_2 is dominated by \mathcal{O}(h^2) . Thus, we can check the temporal order of convergence of the method by halving the temporal step size under the condition \triangle x < h \ll 1 . The numerical results for the errors and their ratios are displayed for the different viscosities in Table 1. The RTC values in the table confirm that the method has the expected second-order temporal convergence rate.
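    As a rough sketch of how the exact solution above might be evaluated, the snippet below applies Gauss–Hermite quadrature to both integrals, which share the weight exp(-s^2). The function name and the vectorized form are our own assumptions, not the authors' code; for very small \nu the exponent is shifted by its maximum to avoid overflow, which cancels between numerator and denominator.

```python
import numpy as np

def exact_burgers(x, t, nu, n_nodes=200):
    """Cole-Hopf form of the exact solution of Burgers' equation, with both
    integrals approximated by Gauss-Hermite quadrature (weight exp(-s^2))."""
    s, w = np.polynomial.hermite.hermgauss(n_nodes)
    arg = np.pi * (x - np.sqrt(4.0 * nu * t) * s)
    expo = -np.cos(arg) / (2.0 * np.pi * nu)
    phi = np.exp(expo - expo.max())   # common shift cancels in the ratio
    return -np.sum(w * np.sin(arg) * phi) / np.sum(w * phi)

print(exact_burgers(x=0.25, t=0.5, nu=0.1))
```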

    Table 1.  Errors and RTC with cubic Lagrangian interpolation and \triangle x = {1}/{2^{11}} .

    h        | \| \mathbf{e}^N\|_2 (\nu=0.01) | RTC  | \| \mathbf{e}^N\|_2 (\nu=0.1) | RTC  | \| \mathbf{e}^N\|_2 (\nu=1.0) | RTC
    1/2^6    | 2.52\times 10^{-3}             | -    | 5.55\times 10^{-5}            | -    | 4.32\times 10^{-6}            | -
    1/2^7    | 7.58\times 10^{-4}             | 1.73 | 1.36\times 10^{-5}            | 2.03 | 1.04\times 10^{-6}            | 2.05
    1/2^8    | 2.06\times 10^{-4}             | 1.88 | 3.35\times 10^{-6}            | 2.02 | 2.56\times 10^{-7}            | 2.05
    1/2^9    | 5.38\times 10^{-5}             | 1.94 | 8.32\times 10^{-7}            | 2.01 | 6.37\times 10^{-8}            | 2.01
    1/2^{10} | 1.37\times 10^{-5}             | 1.97 | 2.07\times 10^{-7}            | 2.00 | 1.71\times 10^{-8}            | 1.90


    To check the order of spatial convergence, we run the test problem with three different viscosity coefficients \nu = 0.01, 0.5, 1.0 , varying the step sizes under the relation h = \triangle x^2 , that is, (h, \triangle x) = (2^{-2k}, 2^{-k}) , k = 1, 2, \cdots, 9 . The numerical results are displayed in Table 2. They show that the method has almost fourth-order spatial convergence, since each term in the error expression is of fourth order under the relation h = \triangle x^2 , as seen in (4.1). This supports the theoretical order of convergence of Theorem 3.6.

    Table 2.  Errors with quintic Lagrangian interpolation for (h, \triangle x) = (4^{-k}, 2^{-k}) under h = \triangle x^2 .

    (h, \triangle x)  | \| \mathbf{e}^N\|_{2} (\nu=0.01) | Rate | (h, \triangle x)  | \| \mathbf{e}^N\|_{2} (\nu=0.5) | Rate | (h, \triangle x)  | \| \mathbf{e}^N\|_{2} (\nu=1.0) | Rate
    (4^{-5}, 2^{-5})  | 8.05\times 10^{-3}               | -    | (4^{-1}, 2^{-1})  | 2.28\times 10^{-2}              | -    | (4^{-2}, 2^{-2})  | 6.29\times 10^{-5}              | -
    (4^{-6}, 2^{-6})  | 7.05\times 10^{-4}               | 3.51 | (4^{-2}, 2^{-2})  | 1.15\times 10^{-3}              | 4.31 | (4^{-3}, 2^{-3})  | 4.24\times 10^{-6}              | 3.89
    (4^{-7}, 2^{-7})  | 3.72\times 10^{-5}               | 4.24 | (4^{-3}, 2^{-3})  | 7.29\times 10^{-5}              | 3.98 | (4^{-4}, 2^{-4})  | 2.54\times 10^{-7}              | 4.06
    (4^{-8}, 2^{-8})  | 2.04\times 10^{-6}               | 4.19 | (4^{-4}, 2^{-4})  | 5.30\times 10^{-6}              | 3.78 | (4^{-5}, 2^{-5})  | 1.60\times 10^{-8}              | 3.99
    (4^{-9}, 2^{-9})  | 1.26\times 10^{-7}               | 4.01 | (4^{-5}, 2^{-5})  | 4.32\times 10^{-7}              | 3.62 | (4^{-6}, 2^{-6})  | 1.05\times 10^{-9}              | 3.94


    Finally, in order to check the reciprocal term of the temporal step size \mathcal{O}({\triangle x^{p+1}}/{h}) , we apply Lagrange interpolations with different orders p = 1, 2, 3 . It can be seen that the term {\triangle x^{p+1}}/{h} dominates the error as soon as h < \triangle x^{(p+1)/3} \ll 1 , since

    \frac{\triangle x^{p+1}}{h} = \frac{\triangle x^{\frac{p+1}{3}}}{h} \triangle x^{\frac{2(p+1)}{3}} \geq \triangle x^{\frac{2(p+1)}{3}} > \begin{cases} h^2 \\ \triangle x^4 \end{cases} \text{ for } p = 1,2,3.
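    To make the dominance condition concrete, the short computation below evaluates the predicted crossover step size h \approx \triangle x^{(p+1)/3} for two representative spatial grid sizes used in the experiment that follows; the printed values are a consequence of the inequality above only, not additional data.

```python
import numpy as np

# Predicted crossover: for h below dx**((p + 1) / 3) the reciprocal term
# dx**(p + 1) / h is expected to dominate the error.
for dx in (1.0 / 2**6, 1.0 / 2**7):
    for p in (1, 2, 3):
        h_cross = dx ** ((p + 1) / 3.0)
        print(f"dx = 2^{int(round(np.log2(dx)))}, p = {p}: "
              f"crossover h around 2^{np.log2(h_cross):.2f}")
```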

    We display the behavior of the error for each interpolation order p = 1, 2, 3 in Figure 1 while varying h = 1/2^k, k = 3, 4, \cdots, 11 for fixed spatial grid sizes \triangle x = 1/2^6, 1/2^{7} and viscosities \nu = 0.1, 0.01 . The horizontal and vertical axes represent -\ln(h) and \ln(\|\mathbf{e}^N(h)\|_2) , respectively. The error follows a slope of approximately 2 when h \geq \triangle x^{(p+1)/3} , since \mathcal{O}(h^2) dominates the error, as seen in Figure 1. However, the error stops decreasing once the {\triangle x^{p+1}}/{h} term begins to dominate. Moreover, the error slowly increases to a certain level as h decreases with h \leq \triangle x^{(p+1)/3} . This behavior comes from the term \mathcal{O}(1/h) , and hence it is not useful for the accuracy of the scheme to choose a temporal step size with h < \triangle x^{(p+1)/3} .

    Figure 1.  The error with respect to h for different interpolations.

    Example 4.2. This example is the unsteady two-dimensional Burgers' equation with periodic boundary conditions, described as

    \begin{equation*} \label{num:Ex5} u_{t}(t,x,y )+\vec{u}(t,x,y)\cdot \nabla u(t,x,y) = \nu\Delta u(t,x,y),\quad (x,y)\in (-1,1)\times(-1,1), \; t\in(0,1], \end{equation*}

    where \vec{u}(t, x, y) = [{u}(t, x, y), {u}(t, x, y)]^{\mathtt T} and the initial value is given as u(0, x, y) = -\sin\left(\pi y \right).

    In this example, we discretize the spatial domain uniformly with \triangle x = \triangle y and present the resulting convergence orders of Theorem 3.6 as extended to the two-dimensional problem. In the absence of an exact solution, we compute reference solutions by setting (h, \triangle x) = ({1}/{2^{8}}, {1}/{2^{8}}) with cubic Lagrangian interpolation and (h, \triangle x) = ({1}/{2^{17}}, {1}/{2^{8}}) with quintic Lagrangian interpolation to estimate the temporal and spatial convergence rates, respectively. To estimate the temporal order of convergence, we proceed as in Example 4.1, varying h = {1}/{ 2^{k}} , k = 1, 2, \cdots, 7 for a fixed small spatial grid size \triangle x = {1}/{2^{11}} , which guarantees that the two spatially dependent terms \triangle x^4 and \triangle x^{p+1}/h are sufficiently smaller than h^2 . The numerical results for the three viscosities \nu = 0.01, 0.1, 1.0 are displayed in Figure 2(a). They indicate that the method has an almost second-order convergence rate in time, which allows a much larger temporal step size than the spatial one, in agreement with the theoretical temporal order of convergence of Theorem 3.6 and with good stability.

    Figure 2.  (a) Rate of temporal convergence (RTC) with cubic Lagrangian interpolation and \triangle x = {1}/{2^{11}} . (b) Errors with quintic Lagrange interpolation and h = \triangle x ^2 .

    To check the spatial order of convergence, we reduce the spatial step size by half under the relation h = \triangle x^2 , that is, (h, \triangle x) = (1/2^{2k}, 1/2^{k}) , k = 1, 2, \cdots, 7 , while applying quintic Lagrange interpolation for the viscosity coefficients \nu = 0.01, 0.1, 1.0 ; the results are displayed in Figure 2(b). The figure shows that the proposed scheme has a fourth-order spatial convergence rate, as in the previous example.

    In the next numerical experiment, we support the numerical stability result of Theorem 3.8 for the proposed BSLM by estimating the generic constant C_{s} in the theorem and by showing the behavior of the numerical solution over a sufficiently long time T without any restriction on the step sizes \triangle x and h . The constant C_{s} is computed as the ratio of the norm of the solution at the final time T = 1 to the norms of the solution and its spatial differences at the initial time t_0 = 0 , that is,

    C_{s} = \frac{ \| \mathbf{U}^N\|_2}{ \left\|{{ \mathbf{U}}}^{0}\right\|_2+ \left\|{\delta}_x {{ \mathbf{U}}}^{0}\right\|_2+ \triangle x \left\|{\delta}^2_x {{ \mathbf{U}}}^{0}\right\|_2 }.
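    A minimal sketch of how this ratio could be computed from the discrete solution vectors is given below. The periodic backward differences and the \triangle x -scaled discrete \ell_2 -norm are our assumptions of the standard definitions and may differ in detail from those of the paper.

```python
import numpy as np

def disc_l2(v, dx):
    # Discrete l2-norm; the dx scaling is an assumption of the usual definition.
    return np.sqrt(dx * np.sum(v**2))

def estimate_Cs(U0, UN, dx):
    """Ratio of the final-time norm to the initial norms of U^0 and its
    periodic backward differences delta_x U^0 and delta_x^2 U^0."""
    d1 = (U0 - np.roll(U0, 1)) / dx
    d2 = (d1 - np.roll(d1, 1)) / dx
    return disc_l2(UN, dx) / (disc_l2(U0, dx) + disc_l2(d1, dx) + dx * disc_l2(d2, dx))

# Illustrative usage with a placeholder final solution UN = 0.5 * U0.
x = np.linspace(-1.0, 1.0, 64, endpoint=False)
dx = x[1] - x[0]
U0 = -np.sin(np.pi * x)
print(estimate_Cs(U0, 0.5 * U0, dx))
```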

    We present the estimates of C_{s} while varying \triangle x = 1/2^k, \; k = 4, 5, \cdots, 10 for fixed h = 1/2^4 and while varying h = 1/2^k, \; k = 4, 5, \cdots, 10 for fixed \triangle x = 1/2^4 when \nu = 0.01, 0.05, 0.1, 0.5 , and 1 , which are plotted in Figures 3(a) and 3(b), respectively. The figures show that the constant C_{s} is independent of the sizes of \triangle x and h , as claimed in Theorem 3.8. We next exhibit the behavior of the discrete \ell_2 -norm of the solution \mathbf{U}^n versus time t^n < T = 10 with different spatial step sizes \triangle x = 1/2^k, k = 1, 5, 9, 13 for fixed h = 1/2 when \nu = 0.01 and \nu = 1.0 ; the results are displayed in Figure 4. The experiments show that the solutions decrease consistently over a long time without blowing up for all tested spatial step sizes and both values of \nu , even when h is 2^{12}( = 4096) times larger than \triangle x in the case k = 13 ; this provides strong evidence for the numerical stability of the scheme. Therefore, the experimental results confirm that the proposed BSLM is stable without any restriction on the relation between the spatial and temporal discretizations, as claimed.

    Figure 3.  (a) Estimates of C_{s} for fixed h = 1/2^4 . (b) Estimates of C_{s} for fixed \triangle x = 1/2^4 .
    Figure 4.  (a) Estimates of \| \mathbf{U}^n\|_2 for 0\leq t^n \leq 10 when \nu = 0.01 . (b) Estimates of \| \mathbf{U}^n\|_2 for 0\leq t^n \leq 10 when \nu = 1.0 .

    We end the section with a comment about the performance of the solution with respect to the viscosity. It is well known that one of the difficulties in solving Burgers' equation arises when a small viscosity \nu is applied, which causes the solution to steepen into a shock-like discontinuity as time increases. This creates spurious oscillations, known as the Gibbs phenomenon, near the shock front [18]. Designing schemes that handle such sharp-front solutions is one of the main challenges [7].

    In Figure 5, we present the behavior of the numerical solution obtained by the proposed BSLM for the viscosity coefficients \nu = 1, 0.1, 0.01, 0.001 . The figures show that the proposed scheme does not suffer from the sharp-front phenomenon down to the viscosity \nu = 0.001 . For more related results and comparisons of the proposed method, we refer the interested reader to [22,24,26,27,28] and the references therein.

    Figure 5.  Behavior of the numerical solution u(t, x) when \nu = 1, 0.1, 0.01, 0.001 for h = 1/2^9 , \triangle x = 1/2^9 , p = 3.

    We have proposed a BSLM for solving Burgers' equation with the periodic boundary condition and have proved that the scheme has second-order and fourth-order convergence in time and space, respectively. In particular, the asymptotic error expression possesses a term that is inversely proportional to the temporal step size, with a spatial power determined by the interpolation order. Furthermore, we have shown that the proposed BSLM is unconditionally stable, allowing a temporal step size much larger than the spatial step size, and we have supported these results with numerical experiments. The analysis of the model problem suggests extensions to linear advection–diffusion problems and to higher-dimensional model problems, with no restriction due to boundary conditions, using the same order of discretization and yielding analogous convergence results. The paper thus establishes the asymptotic order of convergence and the numerical stability of the scheme with respect to the discrete \ell_2 -norm for varying viscosity coefficients, and both are verified numerically with examples.

    The first author (P. Kim) was supported by the R&D program through the Korea Institute of Fusion Energy (KFE) funded by Government funds, Republic of Korea (No. EN2241-3), and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1A4A1018190).

    The authors declare no conflicts of interest.

    Lagrange interpolation and its properties

    In this appendix, we review the Lagrange interpolation polynomial used to analyze the convergence and stability of the proposed BSLM and also introduce its properties. We consider the domain I to be uniformly discretized by a spatial grid size \triangle x and a fixed number J as shown in (2.2).

    For a given periodic function w({x}) defined in I and each local cell I_j: = [x_{j-1}, x_j] , let {\mathfrak L}_{j}w({x}) be the j^{th} local Lagrange interpolation polynomial of degree p\geq1 defined by

    \begin{equation} \begin{aligned} & {\mathfrak L}_{j}w({x}): = \sum\limits_{k\in {\Lambda}_j} w_k { L}_{j,k}(x),\quad x\in I_j,\\ & L_{j, k}( x) : = \prod\limits_{\substack{j_\ell\in {\Lambda}_j\backslash \{j_k\}}} \frac{x-{x}_{j_\ell} }{ x_{j_k} -x_{j_\ell} },\quad {\Lambda}_j: = \left\{ {j_\ell}: = j+\ell-\left[\frac{p+1}{2}\right],\quad 0\leq \ell \leq p \right\}, \end{aligned} \end{equation} (A.1)

    where w_k: = w(x_k) and [m] denotes the integer satisfying [m]\leq m < [m]+1 . For the interpolation polynomial near the boundary points, we set w_{J+ k} = w_{ k} and w_{-k} = w_{J- k} by periodicity whenever J+k or -k \in {\Lambda}_j . Using the local interpolation {\mathfrak L}_{j}w(x) of (A.1), a sufficiently smooth function w(x) can be approximated as

    \begin{equation} w({x}) \approx {{\mathfrak L}} \mathbf{w}({x}): = \sum\limits_{1\leq j\leq J}\mathbf{1}_{{I}_{j}}({x}){{\mathfrak L}_{j}}w({x}), \quad \mathbf{w}: = [w_{1},\cdots,w_{J}]^{\mathtt T}, \end{equation} (A.2)

    and for x\in I_j , its truncation error (see [2] for details) satisfies

    \begin{equation} \Bigl|w({x})- {{\mathfrak L}} \mathbf{w}({x})\Bigr| = \frac{ \big| \prod\limits_{j_\ell\in {\Lambda}_{j}} (x-x_{j_\ell})\big|}{(p+1)!} \big|w^{(p+1)}(\xi_j)\big| \leq \frac{\triangle x^{p+1}}{4(p+1)}\big\|w^{(p+1)}\big\|_{\infty} \end{equation} (A.3)

    for some \xi_j \in I_j , since |\prod_{{j_\ell\in {\Lambda}_{j} }} (x-x_{j_\ell})| \leq\frac{1}{4} {\triangle x ^{p+1}\left[\frac{p+1}{2}\right]! \left[\frac{p+2}{2}\right]!} for uniformly spaced nodes. Here, \mathbf{1}_{{I}_{j}} denotes the indicator function and \|\cdot\|_{\infty} is the infinity norm. Note that the interpolant {\mathfrak L} \mathbf{w}({x}) is piecewise continuous in I but in general not differentiable at the grid points. To overcome this non-differentiability at the cell interfaces, we define a derivative \widetilde{D}_x {\mathfrak L} \mathbf{w}({x}) by

    \begin{equation} {\widetilde{D}_x}({\mathfrak L} \mathbf{w}(x)): = \begin{cases} \frac{1}{2}\Big( {D_x}({\mathfrak L}_{j}w(x))+{D_x} ({\mathfrak L}_{j+1} w(x))\Big) & \text{if } x = x_j, \\ {D_x}({\mathfrak L}_{j}w(x)) & \text{otherwise.} \end{cases} \end{equation} (A.4)

    We note [27,Lemma 2.3] that for a sufficiently smooth function w({x}) defined on I_j , the derivative \widetilde D_x defined in (A.4) satisfies

    \begin{equation} {D_x }w(x)-{ \widetilde D_x}({\mathfrak L} \mathbf{w}(x)) = \mathcal{O}(\triangle x^{p}). \end{equation} (A.5)
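    For reference, the following Python sketch implements the local interpolation (A.1)-(A.2) on a uniform periodic grid; the 0-based indexing, the cell-location rule, and the handling of the periodic wrap-around are simplifications of our own rather than the paper's implementation.

```python
import numpy as np

def lagrange_interp_periodic(w, x_eval, dx, p=3):
    """Evaluate the piecewise local Lagrange interpolant of (A.1)-(A.2) at x_eval
    on the periodic grid x_k = k*dx, k = 0, ..., J-1 (0-based for simplicity),
    using the stencil Lambda_j = { j + l - floor((p+1)/2) : 0 <= l <= p }."""
    J = len(w)
    period = J * dx
    j = int(np.ceil(x_eval / dx)) % J                 # cell I_j = (x_{j-1}, x_j]
    stencil = [(j + l - (p + 1) // 2) % J for l in range(p + 1)]
    xs = np.array([k * dx for k in stencil], dtype=float)
    # shift each node to its periodic image nearest to x_eval
    xs += period * np.round((x_eval - xs) / period)
    val = 0.0
    for a, xa in enumerate(xs):                       # Lagrange basis L_{j,k}
        basis = 1.0
        for b, xb in enumerate(xs):
            if b != a:
                basis *= (x_eval - xb) / (xa - xb)
        val += w[stencil[a]] * basis
    return val

# Quick check against a smooth periodic function.
xg = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
wg = np.sin(xg)
print(lagrange_interp_periodic(wg, 0.1234, xg[1] - xg[0]), np.sin(0.1234))
```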

    Lastly, we introduce some boundedness properties for the interpolation polynomial.

    Lemma A.1. The derivative of {\mathfrak L}_{j}w({x}) defined in (A.1) can be bounded by

    \bigl|{D_x}({\mathfrak L}_{j}w(x))\bigr|\leq d_1\sum\limits_{k\in {\Lambda'_j} } \bigl|\delta_x w_k\bigr|,\quad\delta_x w_k: = \frac{w_{k}-w_{k-1}}{\triangle x},

    where {\Lambda'_j}: = \{j_\ell\in \Lambda_j | 1\leq \ell\leq p \}, \; d_1: = d_0 p(p+1), \; d_0: = \max \limits_{x\in I_j} \Big(\prod\limits_{ \substack{k, \ell \in {\Lambda}_j\\ k\neq \ell} } \left| \frac{ x-x_k}{x_\ell-x_k}\right|\Big) . Furthermore, for x\in I_j , we have

    \Bigl|{D_x}({\mathfrak L}_{j}w({x}))\Bigr|\leq \frac{d_2}{\triangle x} \| \mathbf{w}\|, \quad \mathbf{w} = [w_1,\cdots,w_J]^{\mathtt T},

    where d_2: = 2d_0p^2(p+1) , and \|\cdot\| denotes the standard Euclidean L_2 -norm.

    Proof. First note that the Lagrange interpolation basis L_{j, k}(x) of degree p given by (A.1) satisfies

    \begin{equation} \sum\limits_{k \in {\Lambda}_j} L_{j,k}(x) = 1,\quad {D_x}\Big(\sum\limits_{k\in {\Lambda}_j}L_{j,k}(x)\Big) = 0. \end{equation} (A.6)

    Hence, using the property (A.6), one can easily check that

    \begin{equation} \begin{aligned} {D_x}({\mathfrak L}_{j} w({x}))& = \sum\limits_{k \in {\Lambda}_j} w_{k} {D_x}(L_{j,k}(x)) = \sum\limits_{k \in \Lambda'_j} \sum\limits_{\ell = j_1}^{k} {D_x} (L_{j,k}(x))\Bigl( w_{\ell} - w_{\ell-1} \Bigr). \end{aligned} \end{equation} (A.7)

    From the definition of (A.1), we have that for any {k \in {\Lambda}_j}

    \begin{equation} \begin{aligned} \Bigl|{D_x}(L_{j,k}(x))\Bigr|\leq & \sum\limits_{i \in {\Lambda}_j\backslash \{k\}}\left| \frac{ \prod\limits_{\ell\in {\Lambda}_j\backslash \{ i,k\}}(x-x_{\ell}) }{ \prod\limits_{\ell\in {\Lambda}_j\backslash \{ k\}} (x_k-x_\ell)}\right|\\ = & \sum\limits_{i \in {\Lambda}_j\backslash \{ k\}}\frac{1}{\left|x_k-x_i\right|}\prod\limits_{\ell \in {\Lambda}_j \backslash \{ i,k\}} \left| \frac{ x-x_\ell}{x_k-x_\ell}\right| \\ \leq & \frac{p}{\triangle x}d_0, \end{aligned} \end{equation} (A.8)

    and Eqs (A.7) and (A.8) imply

    \begin{aligned} \Bigl|{D_x}({\mathfrak L}_{j}w({ x}))\Bigr|& \leq d_0 p (p +1)\sum\limits_{k \in {\Lambda'_j}}\Bigl|\frac{ w_{k}-w_{k-1}}{\triangle x}\Bigr| \leq d_1 \sum\limits_{k \in {\Lambda'_j}} \Bigl|\frac{ w_{k}-w_{k-1}}{\triangle x}\Bigr|, \end{aligned}

    which proves the first assertion. For the second assertion, we apply the triangle inequality and the Cauchy–Schwarz inequality to obtain

    \begin{aligned} \Bigl|{D_x}{\mathfrak L}_{j} w({ x})\Bigr|& \leq \frac{d_0p(p+1)}{\triangle x} \sum\limits_{k \in \Lambda'_j}\Bigl(|w_{k}| +| w_{k-1}| \Bigr)\\ &\leq \frac{d_2}{\triangle x} \| \mathbf{w}\|,\\ \end{aligned}

    which completes the proof.



    [1] P. Agarwal, J. Nieto, Some fractional integral formulas for the Mittag-Leffler type function with four parameters, Open Math., 13 (2015), 537-546.
    [2] P. Agarwal, S. V. Rogosin, J. J. Trujillo, Certain fractional integral operators and the generalized multi-index Mittag-Leffler functions, Proceedings-Mathematical Sciences, 125 (2015), 291-306. doi: 10.1007/s12044-015-0243-6
    [3] S. Ahmed, On the generalized fractional integrals of the generalized Mittag-Leffler function, Springer Plus, 3 (2014), 198.
    [4] D. Baleanu, P. Agarwal, S. D. Purohit, Certain fractional integral formulas involving the product of generalized Bessel functions, The Scientific World Journal, 2013 (2013), 1-9.
    [5] A. Erdélyi, W. Magnus, F. Oberhettinger, et al. Higher Transcendental Functions, New York, 1953.
    [6] C. Fox, The G and H functions as symmetrical Fourier kernels, Trans. Amer. Math. Soc., 98 (1961), 395-429.
    [7] R. Hilfer, Applications of Fractional Calculus in Physics, World Scientific, Singapore, 2000.
    [8] M. Kamarujjama, O. Khan, Computation of new class of integrals involving generalized Galue type Struve function, J. Comput. Appl. Math., 351 (2019), 228-236. doi: 10.1016/j.cam.2018.11.014
    [9] M. Kamarujjama, N. U. Khan, O. Khan, The generalized p-k-Mittag-Leffler function and solution of fractional kinetic equations, J. Anal., 27 (2019), 1029-1046. doi: 10.1007/s41478-018-0160-z
    [10] M. Kamarujjama, N. U. Khan, O. Khan, et al. Extended type k-Mittag-Leffler function and its applications, Int. J. Appl. Comput. Math., 5 (2019), 72.
    [11] M. Kamarujjama, N. U. Khan, O. Khan, Fractional calculus of generalized p-k-Mittag-Leffler function using Marichev-Saigo-Maeda operators, Arab J. Math. Sci., 25 (2019), 156-168.
    [12] O. Khan, N. U. Khan, D. Baleanu, et al. Computable solution of fractional kinetic equations using Mathieu-type series, Adv. Differ. Equ-NY, 2019 (2019), 234.
    [13] A. A. Kilbas, M. Saigo, Fractional calculus of the H-function, Fukuoka Univ. Sci. Rep., 28 (1998), 41-51.
    [14] A. A. Kilbas, N. Sebastian, Generalized fractional integration of Bessel function of the first kind, Integr. Transf. Spec. F., 19 (2008), 869-883. doi: 10.1080/10652460802295978
    [15] A. A. Kilbas, H. M. Srivastava, J. J. Trujillo, Theory and Applications of Fractional Differential Equations, North-Holland Mathematics Studies, Elsevier Amsterdam, 2006.
    [16] V. Kiryakova, The special functions of fractional calculus as generalized fractional calculus operators of some basic functions, Comput. Math. Appl., 59 (2010), 128-141.
    [17] O. L. Marichev, Volterra equation of Mellin convolution type with a horn function in the kernel, Izvestiya Akademii Nauk BSSR Seriya Fiziko-Matematicheskikh Nauk, 1 (1974), 128-129.
    [18] V. N. Mishra, D. L. Suthar, S. D. Purohit, Marichev-Saigo-Maeda fractional calculus operators, Srivastava polynomials and generalized Mittag-Leffler function, Cogent Mathematics & Statistics, 4 (2017), 1320830.
    [19] K. S. Nisar, S. D Purohit, R. K. Parmar, Fractional calculus and certain integrals of generalized multi-index Bessel function, arXiv:1706.08039, 2017.
    [20] S. D. Purohit, D. L. Suthar, S. L. Kalla, Marichev-Saigo-Maeda fractional integration operators of Bessel functions, Matematiche (Catania), 61 (2012), 21-32.
    [21] M. Saigo, A remark on integral operators involving the gauss hypergeometric functions, Math. Rep. Coll. Gen. Educ. Kyushu Univ., 11 (1978), 135-143.
    [22] M. Saigo, N. Maeda, More generalization of fractional calculus. In: P. Rusev, I. Dimovski and V. Kiryakova (Eds.) Proceedings of the 2nd International Workshop on Transform Methods and Special Functions, Varna 1996, Institute of Mathematics and Informatics of the Bulgarian Academy of Sciences, Sofia, 1998.
    [23] R. K. Saxena, K. Nishimoto, N-fractional calculus of generalized Mittag-Leffler functions, J. Fract. Calc., 37 (2010), 43-52.
    [24] R. K. Saxena, T. K. Pogány, On fractional integration formulae for Aleph function, Appl. Math. Comput., 218 (2011), 985-990.
    [25] R. K. Saxena, J. Ram, D. Kumar, Generalized fractional integration of the product of Bessel functions of first kind, Proceeding of the 9th Annual Conference SSFA, 9 (2010), 15-27.
    [26] R. K. Saxena, M. Saigo, Certain properties of the fractional calculus associated with generalized Mittag-Leffler function, Fract. Calc. Appl. Anal., 8 (2005), 141-154.
    [27] H. M. Srivastava, A contour integral involving Fox's H-function, Indian J. Math., 14 (1972), 1-6.
    [28] H. M. Srivastava, M. Garg, Some integrals involving a general class of polynomials and the multivariable H-function, Rev. Roum. Phys., 32 (1987), 685-692.
    [29] H. M. Srivastava, P. W. Karlsson, Multiple Gaussian hypergeometric series, Ellis Horwood Chichester, 1985.
    [30] H. M. Srivastava, R. K. Saxena, Operators of fractional integration and their applications, Appl. Math. Comput., 118 (2001), 1-52.
    [31] D. L. Suthar, H. Hababenon, H. Tadesse, Generalized fractional calculus formulas for a product of Mittag-Leffler function and multivariable polynomials, Int. J. Appl. Comput. Math., 4 (2018), 1-12.
    [32] D. L. Suthar, S. D. Purohit, R. K. Parmar, Generalized fractional calculus of the multi-index Bessel function, Math. Nat. Sci., 1 (2017), 26-32.
    [33] E. M. Wright, The asymptotic expansion of the generalized hypergeometric function, J. Lond. Math. Soc., 10 (1935), 257-270.