Research article

Error estimate and superconvergence of a high-accuracy difference scheme for 2D heat equation with nonlocal boundary conditions

  • Received: 29 July 2024 Revised: 05 September 2024 Accepted: 11 September 2024 Published: 26 September 2024
  • MSC : 65M06, 65M12, 65T50

  • In this work, we first construct an implicit Euler difference scheme for a two-dimensional heat problem, incorporating both local and nonlocal boundary conditions. We then employ the discrete Fourier transform together with a new transformation technique to rigorously prove that the scheme attains the asymptotically optimal error estimate in the maximum norm. Furthermore, we derive a series of approximation formulas for the partial derivatives of the solution along the two spatial dimensions and prove that each of these formulas possesses superconvergence properties. Finally, to validate the theoretical findings, we present two numerical experiments that showcase the efficiency and accuracy of our approach.

    Citation: Liping Zhou, Yumei Yan, Ying Liu. Error estimate and superconvergence of a high-accuracy difference scheme for 2D heat equation with nonlocal boundary conditions[J]. AIMS Mathematics, 2024, 9(10): 27848-27870. doi: 10.3934/math.20241352




    In recent years, nonclassical boundary and initial-boundary value problems have garnered significant attention across diverse disciplines such as physics, biology, ecology, chemistry, and beyond. Among these, parabolic partial differential equations (PDEs) with nonlocal initial and/or boundary conditions have emerged as powerful tools for modeling a wide array of phenomena. These include, but are not limited to, heat conduction [1], thermoelasticity [2], biotechnology [3], electrochemistry [4], population dynamics [5], and petroleum exploration [6]. The incorporation of nonlocal conditions into these PDEs allows for a more nuanced and realistic representation of the complex interactions and dynamics at play within these systems.

    Let Q_T = \Omega\times I be the computational domain, where \Omega = (0, 1)^2 and I = (0, T) represent the spatial domain and the time domain, respectively, and T is a positive constant. Here, we consider the following 2D parabolic problem, for which we seek a high-accuracy numerical scheme and its theoretical error estimates:

    \begin{align} u_t = a^2\Delta u+f(x, y, t), \; \; (x, y)\in\Omega, \; \; t\in(0, T], \end{align} (1.1)

    which is subject to the initial conditions

    \begin{align} u|_{t = 0} = g(x, y), \; \; (x, y)\in\Omega, \end{align} (1.2)

    the Dirichlet boundary conditions

    \begin{align} u|_{x = 0} = \mu_1(y, t), \; \; y\in(0, 1), \; \; t\in(0, T], \end{align} (1.3)
    \begin{align} u|_{x = 1} = \mu_2(y, t), \; \; y\in(0, 1), \; \; t\in(0, T], \end{align} (1.4)

    and the nonlocal boundary conditions

    \begin{align} u|_{y = 0} = u|_{y = 1}+\mu_3(x, t), \; \; x\in(0, 1), \; \; t\in(0, T], \end{align} (1.5)
    \begin{align} u_y|_{y = 0} = \mu_4(x, t), \; \; x\in(0, 1), \; \; t\in(0, T], \end{align} (1.6)

    where u(x, y, t) is the unknown function, g(x, y) , \mu_i(y, t) \; (i = 1, 2) , and \mu_j(x, t) \; (j = 3, 4) are known functions, and a is a positive constant.

    The two nonlocal boundary conditions (1.5) and (1.6) are often used to describe the correlation of a physical quantity across two parallel boundaries of a physical system, as well as situations where the normal derivative at a boundary is controlled by external factors. Such conditions are commonly used to model the interactions between boundaries and boundary effects in processes such as heat conduction and fluid flow.

    If the exact solution u of problems (1.1)–(1.6) satisfies certain smoothness conditions, then the following compatibility conditions are deduced: for all (x, y)\in\overline{\Omega} , the following relations hold:

    \begin{align*} g(0, y) = \mu_1(y, 0), \quad g(1, y) = \mu_2(y, 0), \quad g(x, 0) = g(x, 1)+\mu_3(x, 0), \quad g_y(x, 0) = \mu_4(x, 0). \end{align*}

    The analytical frameworks and numerical techniques employed in tackling parabolic problems with nonlocal conditions have attracted the attention of many scholars. Pertaining to the crucial aspects of convergence and stability for such problems, we acknowledge the foundational work presented in [7,8,9], as well as the extensive references cited therein. Among the prevalent numerical methodologies, finite difference methods (FDM) stand out prominently, with notable contributions from studies such as [7,10,11,12,13]. Additionally, finite element methods (FEM) have garnered substantial attention, exemplified by works cited in [14,15]. Furthermore, the realm of numerical solutions encompasses innovative approaches like Adomian expansions [16], the local coordinates method [17], and the utilization of reproducing kernel spaces [18], each offering unique insights and advancements in this field.

    It is widely acknowledged that two-dimensional parabolic partial differential equations (PDEs), characterized by their two spatial variables, pose significant challenges for theoretical analysis, particularly in the realms of convergence analysis and error estimation. The additional spatial dimension often complicates the mathematical treatment, necessitating innovative strategies. One promising approach to mitigate these difficulties is the discrete Fourier transform (DFT) method, which reduces the number of independent variables involved in the convergence analysis. In this study, we build upon our previous work [19,20] by extending the numerical schemes and integrating the DFT method on the spatial variable x for error estimation within the context of a two-dimensional parabolic PDE subject to a nonlocal boundary condition.

    However, a major obstacle arises from the complex boundary condition imposed on the spatial variable y . This condition presents a challenge to traditional DFT methods, which are suited only to certain types of boundary conditions. To overcome this limitation, we propose a novel transformation tailored specifically to handle this periodic-type boundary scenario. Furthermore, we derive formulas for the solution derivatives and rigorously prove that these formulas attain optimal asymptotic error estimates in the maximum norm. This achievement underscores the effectiveness and applicability of our proposed methodology in accurately approximating and analyzing solutions to two-dimensional parabolic PDEs with intricate boundary conditions.

    This paper is organized as follows. In Section 2, the backward Euler difference scheme for the solution of problems (1.1)–(1.6) is presented. Then, in Section 3, we utilize the DFT and develop a new transformation to analyze the error estimate for the corresponding difference equation. The superconvergence of the derivative approximations and the associated theoretical results are considered in Section 4. Finally, some numerical experiments are presented in Section 5.

    Now, we use the FDM to discretize problems (1.1)–(1.6). The domain \overline{Q}_T is discretized by the uniformly distributed grid points (x_i, y_j, t_n) , where

    \begin{align*} x_i = ih, \; \; i = 0, 1, \cdots, 2N, \; \; 2Nh = 1, \quad y_j = jh, \; \; j = 0, 1, \cdots, 2N, \quad t_n = n\tau, \; \; n = 0, 1, \cdots, M, \; \; M\tau = T, \end{align*}

    where \tau is the time stepsize, and h is the space stepsize along both the x and y directions.
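The grid above can be sketched in a few lines; the parameter values N = 8 , M = 100 , and T = 1 below are hypothetical, chosen purely for illustration.

```python
import numpy as np

# Illustrative (hypothetical) parameters: 2N subintervals per space
# direction (so 2N h = 1) and M time steps (so M tau = T).
N, M, T = 8, 100, 1.0
h = 1.0 / (2 * N)              # space stepsize
tau = T / M                    # time stepsize

x = np.arange(2 * N + 1) * h   # x_i = i h,   i = 0, 1, ..., 2N
y = np.arange(2 * N + 1) * h   # y_j = j h,   j = 0, 1, ..., 2N
t = np.arange(M + 1) * tau     # t_n = n tau, n = 0, 1, ..., M
```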

    Define a function space by

    \begin{align*} C^m(\overline{Q}_T) = \left\{u \; \middle| \; \frac{\partial^{s_1+s_2+s_3}u}{\partial x^{s_1}\partial y^{s_2}\partial t^{s_3}}\in C(\overline{Q}_T), \; s_1+s_2+s_3\leq m\right\}, \end{align*}

    and its norm by

    \begin{align*} \left\Vert{u}\right\Vert_{m, \infty} = \max\limits_{s_1+s_2+s_3\leq m}\; \max\limits_{(x, y, t)\in\overline{Q}_T} \left\vert{\frac{\partial^{s_1+s_2+s_3}u}{\partial x^{s_1}\partial y^{s_2}\partial t^{s_3}}}\right\vert, \end{align*}

    where m and s_i \; (i = 1, 2, 3) are given nonnegative integers.

    The key to seeking a numerical solution of problems (1.1)–(1.6) lies in how to discretize the nonlocal boundary condition (1.6). Suppose u\in C^4(\overline{Q}_T) . Using the Taylor formula, we have

    \begin{align} u(x, h, t) = u(x, 0, t)+hu_y(x, 0, t)+\frac{h^2}{2}u_{yy}(x, 0, t)+\frac{h^3}{3!}u_{yyy}(x, 0, t)+O(h^4). \end{align} (2.1)

    Using (1.1), we have

    \begin{align} u_{yy}(x, 0, t) = \frac{1}{a^2}u_t(x, 0, t)-u_{xx}(x, 0, t)-\frac{1}{a^2}f(x, 0, t). \end{align} (2.2)

    Moreover, we obtain

    \begin{align*} u_{yyy}(x, 0, t) = \frac{1}{a^2}u_{ty}(x, 0, t)-u_{xxy}(x, 0, t)-\frac{1}{a^2}f_y(x, 0, t). \end{align*}

    Therefore, with (1.6), we obtain

    \begin{align} u_{yyy}(x, 0, t) = \frac{1}{a^2}(\mu_4)_t(x, t)-(\mu_4)_{xx}(x, t)-\frac{1}{a^2}f_y(x, 0, t). \end{align} (2.3)

    Substituting (1.6), (2.2), and (2.3) into (2.1), we have

    \begin{align*} u(x, h, t) & = u(x, 0, t)+h\mu_4(x, t)+\frac{h^2}{2}\left(\frac{1}{a^2}u_t(x, 0, t)-u_{xx}(x, 0, t)-\frac{1}{a^2}f(x, 0, t)\right) \\ &\quad +\frac{h^3}{3!}\left(\frac{1}{a^2}(\mu_4)_t(x, t)-(\mu_4)_{xx}(x, t)-\frac{1}{a^2}f_y(x, 0, t)\right)+O(h^4), \end{align*}

    i.e.

    \begin{align} u_t(x, 0, t) = \frac{2a^2}{h^2}\left(u(x, h, t)-u(x, 0, t)\right)+a^2u_{xx}(x, 0, t)+\widetilde{\mu}_4(x, t)+O(h^2), \end{align} (2.4)

    where

    \begin{align} \widetilde{\mu}_4(x, t) = f(x, 0, t)-\frac{2a^2}{h}\mu_4(x, t)-\frac{h}{3}\left((\mu_4)_t(x, t)-a^2(\mu_4)_{xx}(x, t)-f_y(x, 0, t)\right). \end{align} (2.5)

    From the derivation process described above, the discretization of (1.6) is converted to discretizing (2.4).
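As a sanity check of (2.4) and (2.5), the following sketch (ours, not from the paper) uses the manufactured solution u = e^{-t}\sin(\pi x)e^{y} with a = 1 , for which f = u_t-\Delta u = (\pi^2-2)u and \mu_4(x, t) = e^{-t}\sin(\pi x) ; the residual of (2.4) should behave like O(h^2) , so halving h should reduce it by roughly a factor of four.

```python
import numpy as np

# Manufactured solution u = e^{-t} sin(pi x) e^{y}, a = 1 (illustrative).
x0, t0 = 0.3, 0.5

def u(x, y, t):
    return np.exp(-t) * np.sin(np.pi * x) * np.exp(y)

def residual(h):
    """Residual of identity (2.4) with tilde{mu}_4 taken from (2.5)."""
    u0 = u(x0, 0.0, t0)                     # u(x, 0, t)
    mu4 = u0                                # u_y(x, 0, t) = u(x, 0, t) here
    f0 = (np.pi**2 - 2) * u0                # f(x, 0, t)
    ut = -u0                                # u_t(x, 0, t)
    uxx = -np.pi**2 * u0                    # u_xx(x, 0, t)
    mu4_t, mu4_xx = -mu4, -np.pi**2 * mu4   # (mu_4)_t, (mu_4)_xx
    fy0 = (np.pi**2 - 2) * mu4              # f_y(x, 0, t)
    mu4_tilde = f0 - 2 / h * mu4 - h / 3 * (mu4_t - mu4_xx - fy0)
    return ut - (2 / h**2 * (u(x0, h, t0) - u0) + uxx + mu4_tilde)

r1, r2 = residual(0.01), residual(0.005)    # ratio close to 4 => order h^2
```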

    Let u^n_{i, j} and U^n_{i, j} be the exact value and the approximation of u(x, y, t) at the grid point (x_i, y_j, t_n) , respectively. Let f^n_{i, j} = f(x_i, y_j, t_n) , g_{i, j} = g(x_i, y_j) , (\mu_m)^n_j = \mu_m(y_j, t_n) \; (m = 1, 2) , (\mu_3)^n_i = \mu_3(x_i, t_n) , and (\widetilde{\mu}_4)^n_i = \widetilde{\mu}_4(x_i, t_n) .

    Then, (2.4) is approximated by the following difference equations:

    \begin{align} \frac{U^n_{i, 0}-U^{n-1}_{i, 0}}{\tau} = \frac{2a^2}{h^2}(U^n_{i, 1}-U^n_{i, 0})+a^2\frac{U^n_{i-1, 0}-2U^n_{i, 0}+U^n_{i+1, 0}}{h^2}+(\widetilde{\mu}_4)^n_i, \; \; i = 1, 2, \cdots, 2N-1. \end{align} (2.6)

    Also, we obtain the difference equation of (1.1):

    \begin{align} \frac{U^n_{i, j}-U^{n-1}_{i, j}}{\tau} = a^2\left(\frac{U^n_{i-1, j}-2U^n_{i, j}+U^n_{i+1, j}}{h^2}+\frac{U^n_{i, j-1}-2U^n_{i, j}+U^n_{i, j+1}}{h^2}\right)+f^n_{i, j}, \; \; i, j = 1, 2, \cdots, 2N-1, \; \; n = 1, 2, \cdots, M. \end{align} (2.7)

    Let \alpha^n_{i, 0} be the local truncation error of (2.6). When u\in C^4(\overline{Q}_T) , using the Taylor formula, we can easily deduce that

    \begin{align*} \left\vert{\alpha^n_{i, 0}}\right\vert \lesssim \left\Vert{u}\right\Vert_{4, \infty}(\tau + h^2) \lesssim \tau + h^2, \quad i = 1, 2, \cdots, 2N-1. \end{align*}

    Similarly, when u\in C^4(\overline{Q}_T) , it holds that

    \begin{align*} \left\vert{\alpha^n_{i, j}}\right\vert \lesssim \left\Vert{u}\right\Vert_{4, \infty}(\tau + h^2) \lesssim \tau + h^2, \quad i = 1, 2, \cdots, 2N-1, \quad j = 1, 2, \cdots, 2N-1, \end{align*}

    where \alpha^n_{i, j} is the local truncation error of (2.7).

    Combining the two estimates above, we obtain

    \begin{align} \left\vert{\alpha^n_{i, j}}\right\vert\lesssim \tau+h^2, \; \; i = 1, 2, \cdots, 2N-1, \; \; j = 0, 1, \cdots, 2N-1. \end{align} (2.8)

    From the above, we obtain the backward Euler difference scheme of problems (1.1)–(1.6):

    \begin{aligned} \frac{U^n_{i, j}-U^{n-1}_{i, j}}{\tau} = a^2 \left( \frac{U^n_{i-1, j}-2U^n_{i, j}+U^n_{i+1, j}}{h^2} +\frac{U^n_{i, j-1}-2U^n_{i, j}+U^n_{i, j+1}}{h^2} \right) +f^n_{i, j}, \\ \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; i, j = 1, 2, \cdots, {2N}-1, \; \; n = 1, 2, \cdots, M, \end{aligned} (2.9a)
    \begin{aligned} \;\;U^0_{i, j} = g_{i, j}, \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; i, j = 0, 1, \cdots, {2N}, \end{aligned} (2.9b)
    \begin{aligned} \;U^n_{0, j} = (\mu_1)^n_j, \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; j = 0, 1, \cdots, {2N}, \; \; n = 1, 2, \cdots, M, \end{aligned} (2.9c)
    \begin{aligned} \;U^n_{{2N}, j} = (\mu_2)^n_j, \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; j = 0, 1, \cdots, {2N}, \; \; n = 1, 2, \cdots, M, \end{aligned} (2.9d)
    \begin{aligned} U^n_{i, 0} = U^n_{i, {2N}}+(\mu_3)^n_i, \; \; \; \; \; i = 1, 2, \cdots, {2N}-1, \; \; n = 1, 2, \cdots, M, \end{aligned} (2.9e)
    \begin{array}{l} \frac{U^n_{i, 0}-U^{n-1}_{i, 0}}{\tau} = \frac{2a^2}{h^2}(U^n_{i, 1}-U^n_{i, 0})+ \frac{a^2}{h^2}(U^n_{i-1, 0}-2U^n_{i, 0}+U^n_{i+1, 0}) + (\widetilde{\mu}_4)^n_i, \\ \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \;\; \; \; \;\; \; \;\; \;\; i = 1, 2, \cdots, {2N}-1, \; \; n = 1, 2, \cdots, M. \end{array} (2.9f)
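The scheme (2.9a)–(2.9f) amounts to one sparse linear solve per time step. The following is a minimal sketch of a single backward Euler step, assuming (purely for illustration) a = 1 and homogeneous data ( f = \mu_1 = \mu_2 = \mu_3 = \widetilde{\mu}_4 = 0 ); the unknowns are U^n_{i, j} for i = 1, \cdots, 2N-1 , j = 0, \cdots, 2N-1 , with U^n_{i, 2N} identified with U^n_{i, 0} through (2.9e). The helper name `idx` is our own.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N = 8
h = 1.0 / (2 * N)
tau = 1e-3
mu = tau / h**2                          # grid ratio tau / h^2
nx, ny = 2 * N - 1, 2 * N                # unknowns per row / number of rows

def idx(i, j):
    """Flatten the grid index (i, j), i = 1..2N-1, j = 0..2N-1."""
    return j * nx + (i - 1)

A = sp.lil_matrix((nx * ny, nx * ny))
for j in range(ny):
    for i in range(1, 2 * N):
        k = idx(i, j)
        A[k, k] = 1 + 4 * mu
        if j == 0:
            # boundary row (2.9f): one-sided second-order condition at y = 0
            A[k, idx(i, 1)] += -2 * mu
        else:
            # interior row (2.9a): implicit 5-point stencil in y
            A[k, idx(i, j - 1)] += -mu
            # j = 2N-1 couples back to j = 0 via (2.9e) (mu_3 = 0 here)
            A[k, idx(i, 0) if j == ny - 1 else idx(i, j + 1)] += -mu
        # x-direction neighbours; U = 0 on x = 0 and x = 1 (mu_1 = mu_2 = 0)
        if i > 1:
            A[k, idx(i - 1, j)] += -mu
        if i < 2 * N - 1:
            A[k, idx(i + 1, j)] += -mu

# One step from U^0 = g(x, y) = sin(pi x), constant in y
x = np.arange(1, 2 * N) * h
U0 = np.tile(np.sin(np.pi * x), ny)
U1 = spla.spsolve(A.tocsr(), U0)         # backward Euler: A U^n = U^{n-1}
```

Since every row of A is diagonally dominant with nonpositive off-diagonal entries, the step does not amplify the maximum norm of the data, which is consistent with the stability suggested by the error analysis that follows.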

    Let e^n_{i, j} = u^n_{i, j}-U^n_{i, j} be the error of the approximation solution U at the grid point (x_i, y_j, t_n) , and \mu = \frac{\tau}{h^2} be the grid ratio. Then, the error equations of (2.9a)–(2.9f) are

    \begin{aligned} {e^n_{i, j}-e^{n-1}_{i, j}} = a^2 \mu \left( {e^n_{i-1, j}+e^n_{i+1, j}} +{e^n_{i, j-1}+e^n_{i, j+1}-4e^n_{i, j}} \right) +\tau\alpha^n_{i, j}, \\ \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; i, j = 1, \cdots, {2N}-1, \; \; n = 1, 2, \cdots, M, \end{aligned} (3.1a)
    \begin{aligned} \;\;e^0_{i, j} = 0, \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; i, j = 0, 1, \cdots, {2N}, \end{aligned} (3.1b)
    \begin{aligned} \;e^n_{0, j} = 0, \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; j = 0, 1, \cdots, {2N}, \; \; n = 1, 2, \cdots, M, \end{aligned} (3.1c)
    \begin{aligned} \;e^n_{{2N}, j} = 0, \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; j = 0, 1, \cdots, {2N}, \; \; n = 1, 2, \cdots, M, \end{aligned} (3.1d)
    \begin{aligned} e^n_{i, 0} = e^n_{i, {2N}}, \; \; \; \; \; \; \; i = 1, 2, \cdots, {2N}-1, \; \; n = 1, 2, \cdots, M, \end{aligned} (3.1e)
    \begin{array}{l} e^n_{i, 0}-e^{n-1}_{i, 0} = 2a^2\mu(e^n_{i, 1}-e^n_{i, 0})+a^2\mu(e^n_{i-1, 0}-2e^n_{i, 0}+e^n_{i+1, 0})+\tau\alpha^n_{i, 0}, \\ \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \;\; \; i = 1, 2, \cdots, {2N}-1, \; \; n = 1, 2, \cdots, M. \end{array} (3.1f)

    Given the complexity of the above error equations, the key to obtaining an error estimate lies in finding transformations that separate the index variables i , j , and n .

    Since the error sequence \{{e}^n_{i, j}\} satisfies (3.1c) and (3.1d), applying the DFT to \{e^n_{i, j}\} with respect to i , we obtain

    \begin{align} {e}^n_{i, j} = \sqrt{2h}\sum\limits_{k = 1}^{{2N}-1}\widehat{e}^n_{k, j} \sin{(k\pi x_i)}, \; \; i, j = 0, 1, \cdots, {2N}. \end{align} (3.2)
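With the symmetric normalization \sqrt{2h} , the sine transform in (3.2) is its own inverse, since \sum_{i = 1}^{2N-1}\sin{(k\pi x_i)}\sin{(m\pi x_i)} = N\delta_{km} and 2hN = 1 . A quick numerical check of this fact (ours, for illustration):

```python
import numpy as np

N = 8
h = 1.0 / (2 * N)
x = np.arange(1, 2 * N) * h         # interior nodes x_1, ..., x_{2N-1}

# S[k-1, i-1] = sqrt(2h) sin(k pi x_i): symmetric and orthogonal,
# hence an involution, S @ S = I.
S = np.sqrt(2 * h) * np.sin(np.pi * np.outer(np.arange(1, 2 * N), x))
err = np.max(np.abs(S @ S - np.eye(2 * N - 1)))
```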

    Similarly, applying the DFT to \{{\alpha}^n_{i, j}\} with respect to i , we obtain

    \begin{align} {\alpha}^n_{i, j} = \sqrt{2h}\sum\limits_{k = 1}^{{2N}-1} \widehat{\alpha}^n_{k, j} \sin{(k \pi x_i)}, \; \; i = 1, 2, \cdots, {2N}-1, \; \; j = 0, 1, \cdots, {2N}-1. \end{align} (3.3)

    It follows from (2.8) and (3.3) that

    \begin{align} \left\vert{\widehat{\alpha}^n_{k, j}}\right\vert \lesssim \frac{\tau+h^2}{h^{\frac{1}{2}}}, \; \; k = 1, 2, \cdots, 2N-1, \; \; j = 0, 1, \cdots, 2N-1. \end{align} (3.4)

    Substituting (3.2) and (3.3) into (3.1a), we obtain

    \begin{align} &\sqrt{2h}\sum\limits^{2N-1}_{k = 1} (\widehat{e}^n_{k, j} - \widehat{e}^{n-1}_{k, j}) \sin{(k\pi x_i)} \\ & = \sqrt{2h}a^2\mu \sum\limits^{2N-1}_{k = 1} \left(\widehat{e}^n_{k, j}(\sin{(k\pi x_{i-1})}-2\sin{(k\pi x_{i})}+\sin{(k\pi x_{i+1})})\right. \\ &\left. +(\widehat{e}^n_{k, j-1}-2\widehat{e}^n_{k, j}+\widehat{e}^n_{k, j+1})\sin{(k\pi x_{i})} \right)+ \sqrt{2h}\tau\sum\limits_{k = 1}^{{2N}-1} \widehat{\alpha}^n_{k, j} \sin{(k \pi x_i)}, \\ &\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; i, j = 1, 2, \cdots, 2N-1. \end{align} (3.5)

    Utilizing the properties of the DFT and (3.5), we obtain

    \begin{align} \widehat{e}^n_{k, j} - \widehat{e}^{n-1}_{k, j} = a^2\mu\left(\widehat{e}^n_{k, j-1}-\left(2+4\sin^2{\frac{k \pi h}{2}}\right)\widehat{e}^n_{k, j}+\widehat{e}^n_{k, j+1}\right)+ \tau\widehat{\alpha}^n_{k, j}, \\ \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; j = 1, 2, \cdots, {2N}-1. \end{align} (3.6)

    Similarly, substituting (3.2) and (3.3) into (3.1f), we have

    \begin{align} \widehat{e}^n_{k, 0} - \widehat{e}^{n-1}_{k, 0} = 2a^2\mu(\widehat{e}^n_{k, 1}-\widehat{e}^n_{k, 0})-4a^2\mu\sin^2{\frac{k\pi h}{2}}\widehat{e}^n_{k, 0}+\tau \widehat{\alpha}^n_{k, 0}. \end{align} (3.7)

    Substituting (3.2) into (3.1b) and (3.1e), we deduce that

    \begin{align} \widehat{e}^0_{k, j} = 0, \; \; j = 0, 1, \cdots, {2N}, \end{align} (3.8)

    and

    \begin{align} \widehat{e}^n_{k, 0} = \widehat{e}^n_{k, {2N}}. \end{align} (3.9)

    Given that the sequence \{\widehat{e}^n_{k, j}\} only satisfies the condition (3.9), the conventional DFT is inadequate for our analysis. We therefore seek a novel transformation that is compatible with (3.9) and, at the same time, invertible. Drawing inspiration from the formulation of the DFT, we introduce the following transformation of the sequence \{\widehat{e}^n_{k, j}\} with respect to j :

    \begin{align} \widehat{e}^n_{k, j} = \sum\limits^{2N-1}_{l = 0} \widetilde{e}^{n}_{k, l}T_l(y_j), \; \; j = 0, 1, \cdots, 2N-1, \end{align} (3.10)

    where

    \begin{align} T_l(y) = \begin{cases} \cos{(2l\pi y)}, & l = 0, 1, \cdots, N, \\ y\sin{(2l\pi y)}, & l = N+1, N+2, \cdots, 2N-1. \end{cases} \end{align} (3.11)

    It is straightforward to verify Lemma 3.1.

    Lemma 3.1. The sequence \{T_l(y_j)\} has the following properties.

    (1) T_l(y_0) = \begin{cases} 1, & l = 0, 1, \cdots, N, \\ 0, & l = N+1, N+2, \cdots, 2N-1. \end{cases}

    (2) T_l(y_1)-T_l(y_0) = \begin{cases} (\cos{(2l\pi h)}-1)T_l(y_0), & l = 0, 1, \cdots, N, \\ h\sin{(2l\pi h)}, & l = N+1, N+2, \cdots, 2N-1. \end{cases}

    (3) For any 0\leq l, j\leq 2N-1 ,

    \begin{align*} &T_l(y_{j-1})-2T_l(y_j)+T_l(y_{j+1})\\ & = \begin{cases} 2(\cos{(2l\pi h)}-1)T_l(y_j), & l = 0, 1, \cdots, N, \\ 2(\cos{(2l\pi h)}-1)T_l(y_j)+2hT_{2N-l}(y_j)\sin{(2l\pi h)}, & l = N+1, N+2, \cdots, 2N-1. \end{cases} \end{align*}

    (4) T_l(y_{2N-j}) = \begin{cases} \cos{(2l\pi y_j)}, & l = 0, 1, \cdots, N, \\ (y_j-1)\sin{(2l\pi y_j)}, & l = N+1, N+2, \cdots, 2N-1. \end{cases}
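Property (3) of Lemma 3.1 can be verified numerically on the grid points; the following sketch (ours, with the illustrative choice N = 4 ) checks it for all l and all interior j .

```python
import numpy as np

N = 4
h = 1.0 / (2 * N)
y = np.arange(2 * N + 1) * h

def T(l, yv):
    """Basis functions T_l(y) from (3.11)."""
    if l <= N:
        return np.cos(2 * l * np.pi * yv)
    return yv * np.sin(2 * l * np.pi * yv)

for l in range(2 * N):
    for j in range(1, 2 * N):           # so that y_{j-1}, y_{j+1} exist
        lhs = T(l, y[j - 1]) - 2 * T(l, y[j]) + T(l, y[j + 1])
        rhs = 2 * (np.cos(2 * l * np.pi * h) - 1) * T(l, y[j])
        if l > N:                       # extra term for l = N+1, ..., 2N-1
            rhs += 2 * h * T(2 * N - l, y[j]) * np.sin(2 * l * np.pi * h)
        assert abs(lhs - rhs) < 1e-12
```

Note that for l > N the identity uses T_{2N-l}(y_j) = \cos{(2(2N-l)\pi y_j)} , which coincides with \cos{(2l\pi y_j)} at the grid points only.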

    For the sake of simplicity in the subsequent analysis, we introduce

    \begin{align} P_l(y): = \begin{cases} \cos{(2l\pi y)}, & l = 0, 1, \cdots, N, \\ \sin{(2l\pi y)}, & l = N+1, N+2, \cdots, 2N-1, \end{cases} \end{align} (3.12)

    and consider the orthogonality relations of the sequences \{P_l(y_i)\} .

    Lemma 3.2. Given that i, j = 0, \cdots, N , we have the following identity:

    \begin{align} \sum\limits_{l = 0}^{N} \sigma_l P_l(y_i) P_l(y_j) = \begin{cases} \frac{N}{2\sigma_i}, & i = j, \\ 0, & i \neq j, \end{cases} \end{align} (3.13)

    where

    \begin{align} \sigma_l = \begin{cases} \frac{1}{2}, & l = 0, N, \\ 1, & \mathit{\mbox{otherwise}}. \end{cases} \end{align} (3.14)

    Proof. Using (3.14) and (3.12), and noticing 2Nh = 1 and y_i = ih , we have

    \begin{align} \sum\limits^N_{l = 0}\sigma_l P_l(y_i)P_l(y_j) & = \sum\limits^{N-1}_{l = 1} P_l(y_i)P_l(y_j)+\frac{1}{2}\sum\limits_{l = 0, N}P_l(y_i)P_l(y_j) \\ & = \sum\limits^{N-1}_{l = 1}\cos{(2l\pi y_i)}\cos{(2l\pi y_j)} + \frac{1}{2}(1+\cos{(2N\pi y_i)}\cos{(2N\pi y_j)}) \\ & = \frac{1}{2}\sum\limits^{N-1}_{l = 1}\cos{(2l\pi y_{i+j})} +\frac{1}{2}\sum\limits^{N-1}_{l = 1}\cos{(2l\pi y_{i-j})} + \frac{1+(-1)^{i+j}}{2}. \end{align} (3.15)

    For 0\leq m\leq 2N , we have

    \begin{align*} 2\sum\limits^{N-1}_{l = 1}\cos{(2l\pi y_m)} & = \sum\limits^{N-1}_{l = 1}(\cos{(2(l-1)\pi y_m)}+\cos{(2(l+1)\pi y_m)})-1+\cos{(2(N-1)\pi y_m)} \nonumber\\ &\quad +\cos{(2\pi y_m)}-\cos{(2N\pi y_m)} \nonumber\\ & = 2\cos{(2\pi y_m)}\sum\limits^{N-1}_{l = 1}\cos{(2l\pi y_m)} + (1+(-1)^m)(\cos{(2\pi y_m)}-1), \end{align*}

    i.e.,

    \begin{align*} 2(1-\cos{(2\pi y_m)})\sum\limits^{N-1}_{l = 1}\cos{(2l\pi y_m)} = (1+(-1)^m)(\cos{(2\pi y_m)}-1). \end{align*}

    If \cos{(2\pi y_m)}\neq 1 , then

    \begin{align} \sum\limits^{N-1}_{l = 1}\cos{(2l\pi y_m)} = - \frac{1+(-1)^m}{2}. \end{align} (3.16)

    Now we focus on the case i\neq j . Since 0\leq i, j\leq N and i\neq j , it follows that 0 < y_{i+j} < 1 and 0 < \left\vert{y_{i-j}}\right\vert < 1 , so \cos{(2\pi y_{i-j})}\neq 1 and \cos{(2\pi y_{i+j})}\neq 1 .

    Thus, with (3.16), and observing the same parity of i+j and i-j , we obtain

    \begin{align*} \sum\limits^{N-1}_{l = 1}\cos{(2l\pi y_{i+j})} = \sum\limits^{N-1}_{l = 1}\cos{(2l\pi y_{i-j})} = - \frac{1+(-1)^{i+j}}{2}. \end{align*}

    Substituting the above equality into (3.15), we deduce that

    \begin{align} \sum\limits^N_{l = 0}\sigma_l P_l(y_i)P_l(y_j) = 0, \; \; i\neq j. \end{align} (3.17)

    Next, we consider the case i = j . Note that \cos{(2\pi y_{2i})} = 1 only when i = 0 or i = N . Therefore, by utilizing (3.16) for 1\leq i\leq N-1 , and by direct computation for i = 0, N , we obtain

    \begin{align} \sum\limits^{N-1}_{l = 1}\cos{(2l\pi y_{2i})} = \begin{cases} -1, & i = 1, 2, \cdots, N-1, \\ N-1, & i = 0, N. \end{cases} \end{align} (3.18)

    Substituting (3.18) into (3.15), we deduce that

    \begin{align} \sum\limits^N_{l = 0}\sigma_l P_l(y_i)P_l(y_j) = \begin{cases} \frac{N}{2}, & i = 1, 2, \cdots, N-1, \\ N, & i = 0, N. \end{cases} \end{align} (3.19)

    Furthermore, with (3.14), (3.17), and (3.19), we arrive at the conclusion stated in (3.13).
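The identity (3.13) can be confirmed numerically; the sketch below (ours, with the illustrative choice N = 6 ) checks it for all pairs (i, j) .

```python
import numpy as np

N = 6
h = 1.0 / (2 * N)
y = np.arange(N + 1) * h
sigma = np.ones(N + 1)
sigma[[0, N]] = 0.5                     # weights (3.14)

# P[l, i] = P_l(y_i) = cos(2 l pi y_i) for 0 <= l, i <= N, cf. (3.12)
P = np.cos(2 * np.pi * np.outer(np.arange(N + 1), y))

for i in range(N + 1):
    for j in range(N + 1):
        s = np.sum(sigma * P[:, i] * P[:, j])
        expected = N / (2 * sigma[i]) if i == j else 0.0
        assert abs(s - expected) < 1e-12
```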

    Similar to Lemma 3.2, we can derive the subsequent lemma as well.

    Lemma 3.3. Given that i, j = N+1, \cdots, 2N-1 , we have the following identity

    \begin{align} \sum\limits^{2N-1}_{l = N+1} P_l(y_i)P_l(y_j) = \begin{cases} \frac{N}{2}, & i = j, \\ 0, & i\neq j. \end{cases} \end{align} (3.20)

    Therefore, we can conclude the following lemma.

    Lemma 3.4. Suppose

    \begin{align} a_i = \sum\limits^{2N-1}_{l = N+1} \widehat{a}_l P_l(y_i), \; \; i = N+1, N+2, \cdots, 2N-1. \end{align} (3.21)

    Then

    \begin{align} \widehat{a}_l = \frac{2}{N} \sum\limits^{2N-1}_{i = N+1} a_iP_l(y_i), \; \; l = N+1, N+2, \cdots, 2N-1. \end{align} (3.22)

    Proof. Using (3.21), (3.12), and Lemma 3.3, we obtain

    \begin{align*} \sum\limits^{2N-1}_{i = N+1}a_iP_l(y_i) & = \sum\limits^{2N-1}_{i = N+1}\sum\limits^{2N-1}_{m = N+1} \widehat{a}_m P_l(y_i)P_m(y_i) \\ & = \sum\limits^{2N-1}_{m = N+1} \widehat{a}_m \sum\limits^{2N-1}_{i = N+1}P_l(y_i)P_m(y_i)\\ & = \frac{N}{2}\widehat{a}_l, \end{align*}

    where the last step uses Lemma 3.3 together with the symmetry P_l(y_i) = P_i(y_l) .

    The proof is finished.

    Similar to Lemma 3.4, we obtain

    Lemma 3.5. Suppose

    \begin{align} a_i = \sum\limits^{N}_{l = 0} \widehat{a}_l P_l(y_i), \; \; i = 0, 1, \cdots, N. \end{align} (3.23)

    Then

    \begin{align} \widehat{a}_l = \frac{2\sigma_l}{N} \sum\limits^{N}_{i = 0} \sigma_i a_iP_l(y_i), \; \; l = 0, 1, \cdots, N. \end{align} (3.24)

    Based on Lemmas 3.4 and 3.5, we obtain the inverse of the transformation (3.25).

    Lemma 3.6. Suppose

    \begin{align} \widehat{a}_j = \sum\limits^{2N-1}_{l = 0} \widetilde{a}_l T_l(y_j), \; \; j = 0, 1, \cdots, 2N-1. \end{align} (3.25)

    Then,

    \begin{align} \widetilde{a}_{l} = \begin{cases} \frac{2\sigma_l}{N}\sum\limits^{N}_{j = 0}\sigma_j ((1-y_j)\widehat{a}_{j}+y_j \widehat{a}_{2N-j})\cos{(2l\pi y_j)}, &l = 0, 1, \cdots, N, \\ \frac{2}{N}\sum\limits^{N}_{j = 0} (\widehat{a}_{j}- \widehat{a}_{2N-j})\sin{(2l\pi y_j)}, &l = N+1, N+2, \cdots, 2N-1, \end{cases} \end{align} (3.26)

    where

    \begin{align} \sigma_j = \begin{cases} \frac{1}{2}, & j = 0, N, \\ 1, & \mathit{\mbox{otherwise}}. \end{cases} \end{align} (3.27)

    Proof. Using (3.25), (3.12), and Lemma 3.1, we have

    \begin{align*} \widehat{a}_{2N-j} & = \sum\limits^N_{l = 0}\widetilde{a}_l P_l(y_j) + (y_j-1)\sum\limits^{2N-1}_{l = N+1}\widetilde{a}_l P_l(y_j), \end{align*}

    and

    \begin{align*} \widehat{a}_j & = \sum\limits^N_{l = 0}\widetilde{a}_l P_l(y_j) + y_j\sum\limits^{2N-1}_{l = N+1}\widetilde{a}_l P_l(y_j). \end{align*}

    From the two equalities above, it follows that

    \begin{align*} (1-y_j)\widehat{a}_j + y_j \widehat{a}_{2N-j} = \sum\limits^N_{l = 0}\widetilde{a}_l P_l(y_j) \end{align*}

    and

    \begin{align*} \widehat{a}_j - \widehat{a}_{2N-j} = \sum\limits^{2N-1}_{l = N+1}\widetilde{a}_l P_l(y_j). \end{align*}

    Moreover, using Lemmas 3.5 and 3.4, we arrive at the conclusion stated in (3.26).
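The inversion formula (3.26) can be verified by a round-trip test: expand random coefficients through (3.25) with the basis (3.11), then recover them via (3.26). A sketch (ours, with the illustrative choice N = 5 ):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
h = 1.0 / (2 * N)
y = np.arange(2 * N + 1) * h             # y_0, ..., y_{2N}

def T(l, yv):
    """Basis functions (3.11)."""
    return np.cos(2 * l * np.pi * yv) if l <= N else yv * np.sin(2 * l * np.pi * yv)

a_tilde = rng.standard_normal(2 * N)     # coefficients tilde{a}_l
# Forward transform (3.25), evaluated for j = 0, ..., 2N (the value at
# j = 2N coincides with j = 0, since T_l(y_{2N}) = T_l(y_0)).
a_hat = np.array([sum(a_tilde[l] * T(l, y[j]) for l in range(2 * N))
                  for j in range(2 * N + 1)])

sigma = np.ones(N + 1)
sigma[[0, N]] = 0.5
rec = np.empty(2 * N)
for l in range(N + 1):                   # first branch of (3.26)
    rec[l] = (2 * sigma[l] / N) * sum(
        sigma[j] * ((1 - y[j]) * a_hat[j] + y[j] * a_hat[2 * N - j])
        * np.cos(2 * l * np.pi * y[j]) for j in range(N + 1))
for l in range(N + 1, 2 * N):            # second branch of (3.26)
    rec[l] = (2 / N) * sum((a_hat[j] - a_hat[2 * N - j])
                           * np.sin(2 * l * np.pi * y[j]) for j in range(N + 1))
```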

    Similar to (3.10), we apply the same transformation to \{\widehat{\alpha}^n_{k, j}\} with respect to j in the following way:

    \begin{align} \widehat{\alpha}^n_{k, j} = \sum\limits^{2N-1}_{l = 0} \widetilde{\alpha}^{n}_{k, l}T_l(y_j), \; \; j = 0, 1, \cdots, 2N-1. \end{align} (3.28)

    Using (3.4), (3.27), and Lemma 3.6, and noting that 0\leq y_j\leq 1 (j = 0, 1, \cdots, 2N) , we can deduce

    \begin{align} \left\vert{\widetilde{\alpha}^n_{k, l}}\right\vert \lesssim \frac{\tau+h^2}{h^{\frac{1}{2}}}, \; \; l = 0, 1, \cdots, 2N-1. \end{align} (3.29)

    Substituting (3.10) and (3.28) into (3.6), we obtain

    \begin{align} \sum\limits^{2N-1}_{l = 0} (\widetilde{e}^{n}_{k, l}-\widetilde{e}^{n-1}_{k, l})T_l(y_j) & = a^2\mu \sum\limits^{2N-1}_{l = 0} \widetilde{e}^{n}_{k, l}(T_l(y_{j-1})-(2+4\sin^2{\frac{k \pi h}{2}})T_l(y_j)+T_l(y_{j+1})) \\ & + \tau\sum\limits^{2N-1}_{l = 0} \widetilde{\alpha}^{n}_{k, l}T_l(y_j), \; \; j = 1, 2, \cdots, 2N-1. \end{align} (3.30)

    Using Lemma 3.1, (3.30) can be rewritten as

    \begin{align*} \sum\limits^{2N-1}_{l = 0}(\widetilde{e}^{n}_{k, l}-\widetilde{e}^{n-1}_{k, l})T_l(y_j) & = a^2\mu \left( \sum\limits^{2N-1}_{l = 0} \left(2(\cos{(2l\pi h)}-1)-4\sin^2{\frac{k \pi h}{2}}\right)\widetilde{e}^{n}_{k, l} T_l(y_j) \right. \nonumber\\ &\left. + 2h\sum\limits^{2N-1}_{l = N+1} \widetilde{e}^{n}_{k, l}T_{2N-l}(y_j)\sin{(2l\pi h)} \right) + \tau \sum\limits^{2N-1}_{l = 0}\widetilde{\alpha}^{n}_{k, l}T_l(y_j), \nonumber\\ &\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; j = 1, 2, \cdots, 2N-1. \end{align*}

    Substituting l: = 2N-l in \sum\limits^{2N-1}_{l = N+1} \widetilde{e}^{n}_{k, l}T_{2N-l}(y_j)\sin{(2l\pi h)} , the above equality takes the following form:

    \begin{align} \sum\limits^{2N-1}_{l = 0}(\widetilde{e}^{n}_{k, l}-\widetilde{e}^{n-1}_{k, l})T_l(y_j) & = -2a^2\mu \left( 2\sum\limits^{2N-1}_{l = 0} \left(\sin^2{(l\pi h)}+\sin^2{\frac{k \pi h}{2}}\right)\widetilde{e}^{n}_{k, l} T_l(y_j) \right. \\ &\left. + h\sum\limits^{N-1}_{l = 1} \widetilde{e}^{n}_{k, 2N-l}T_{l}(y_j)\sin{(2l\pi h)} \right) + \tau \sum\limits^{2N-1}_{l = 0}\widetilde{\alpha}^{n}_{k, l}T_l(y_j), \\ &\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; j = 1, 2, \cdots, 2N-1. \end{align} (3.31)

    Substituting (3.10) into (3.7), we obtain

    \begin{align*} \sum\limits^{2N-1}_{l = 0}(\widetilde{e}^n_{k, l}-\widetilde{e}^{n-1}_{k, l})T_l(y_0) & = 2a^2\mu\sum\limits^{2N-1}_{l = 0}\widetilde{e}^n_{k, l}(T_l(y_1)-T_l(y_0)) - 4a^2\mu \sin^2{\frac{k\pi h}{2}}\sum\limits^{2N-1}_{l = 0}\widetilde{e}^n_{k, l}T_l(y_0) \nonumber\\ &+\tau\sum\limits^{2N-1}_{l = 0}\widetilde{\alpha}^n_{k, l}T_l(y_0). \end{align*}

    Moreover, using Lemma 3.1, we obtain

    \begin{align*} &\sum\limits^{2N-1}_{l = 0}(\widetilde{e}^n_{k, l}-\widetilde{e}^{n-1}_{k, l})T_l(y_0)\\ & = -2a^2\mu \left( 2\sum\limits^{2N-1}_{l = 0} \left(\sin^2{(l\pi h)}+\sin^2{\frac{k \pi h}{2}}\right)\widetilde{e}^{n}_{k, l} T_l(y_0) + h\sum\limits^{N-1}_{l = 1} \widetilde{e}^{n}_{k, 2N-l}T_{l}(y_0)\sin{(2l\pi h)} \right) \nonumber\\ &+ \tau \sum\limits^{2N-1}_{l = 0}\widetilde{\alpha}^{n}_{k, l}T_l(y_0). \end{align*}

    Comparing the above equality with (3.31), we find that (3.31) also holds for j = 0 . Therefore,

    \begin{align} \sum\limits^{2N-1}_{l = 0}(\widetilde{e}^{n}_{k, l}-\widetilde{e}^{n-1}_{k, l})T_l(y_j) & = -2a^2\mu \left( 2\sum\limits^{2N-1}_{l = 0} \left(\sin^2{(l\pi h)}+\sin^2{\frac{k \pi h}{2}}\right)\widetilde{e}^{n}_{k, l} T_l(y_j) \right. \\ &\left. + h\sum\limits^{N-1}_{l = 1} \widetilde{e}^{n}_{k, 2N-l}T_{l}(y_j)\sin{(2l\pi h)} \right) + \tau \sum\limits^{2N-1}_{l = 0}\widetilde{\alpha}^{n}_{k, l}T_l(y_j), \\ &\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; j = 0, 1, \cdots, 2N-1. \end{align} (3.32)

    Using Lemma 3.6 to perform an invertible transformation on (3.32), we obtain

    \begin{align} &\widetilde{e}^{n}_{k, l}-\widetilde{e}^{n-1}_{k, l} = \\ & \begin{cases} -4a^2\mu \left(\sin^2{(l\pi h)}+\sin^2{\frac{k \pi h}{2}}\right)\widetilde{e}^{n}_{k, l} - 2a^2\mu h\widetilde{e}^{n}_{k, 2N-l}\sin{(2l\pi h)} + \tau\widetilde{\alpha}^{n}_{k, l}, \; \; l = 1, 2, \cdots, N-1, &\; \\ -4a^2\mu \left(\sin^2{(l\pi h)}+\sin^2{\frac{k \pi h}{2}}\right)\widetilde{e}^{n}_{k, l} + \tau\widetilde{\alpha}^{n}_{k, l}, \; \; \; \; l = 0, N, N+1, \cdots, 2N-1.&\\ \end{cases} \end{align} (3.33)

    Let

    \begin{align} \omega_{k, l} = \frac{1}{1+4a^2\mu \left(\sin^2{(l\pi h)}+\sin^2{\frac{k \pi h}{2}}\right)}. \end{align} (3.34)

    Obviously,

    \begin{align} 0 < \omega_{k, l} < 1. \end{align} (3.35)

    Using (3.34), (3.33) can be rewritten as

    \begin{align} \widetilde{e}^{n}_{k, l} & = \begin{cases} \omega_{k, l}\widetilde{e}^{n-1}_{k, l} - 2a^2\mu h\omega_{k, l}\widetilde{e}^{n}_{k, 2N-l}\sin{(2l\pi h)} + \tau\omega_{k, l}\widetilde{\alpha}^{n}_{k, l}, \; \; l = 1, 2, \cdots, N-1, &\\ \omega_{k, l}\widetilde{e}^{n-1}_{k, l} + \tau\omega_{k, l}\widetilde{\alpha}^{n}_{k, l}, \; \; \; \; \; \; \; \; \; \; \; l = 0, N, N+1, \cdots, 2N-1.&\\ \end{cases} \end{align} (3.36)

    Substituting (3.10) into (3.8), and using Lemma 3.6, we can easily deduce that

    \begin{align} \widetilde{e}^{0}_{k, l} = 0, \; \; l = 0, 1, \cdots, 2N-1. \end{align} (3.37)

    Using (3.36) and (3.37), we obtain the following recursive formula for \left\{\widetilde{e}^n_{k, l}\right\} :

    \begin{align} &\widetilde{e}^{n}_{k, l} = \\ & \begin{cases} -2a^2\mu h\sin{(2l\pi h)}\sum\limits^n_{m = 1}(\omega_{k, l})^{n-m+1}\widetilde{e}^m_{k, 2N-l} + \tau\sum\limits^n_{m = 1}(\omega_{k, l})^{n-m+1}\widetilde{\alpha}^m_{k, l}, \; \; l = 1, 2, \cdots, N-1, &\\ \tau\sum\limits^n_{m = 1}(\omega_{k, l})^{n-m+1}\widetilde{\alpha}^{m}_{k, l}, \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; l = 0, N, N+1, \cdots, 2N-1.&\\ \end{cases} \end{align} (3.38)

    In order to estimate \left\{\widetilde{e}^n_{k, l}\right\} , we first prove the following bound:

    \begin{align} \sum\limits^n_{m = 1}(\omega_{k, l})^{n-m+1} \lesssim \begin{cases} \frac{1}{\tau(l^2+k^2)}, & l = 0, 1, \cdots, N, \\ \frac{1}{\tau((2N-l)^2+k^2)}, & l = N+1, N+2, \cdots, 2N-1. \end{cases} \end{align} (3.39)

    In fact, from (3.34) and (3.35), we can derive that

    \begin{align} \sum\limits^n_{m = 1}(\omega_{k, l})^{n-m+1} & = \frac{\omega_{k, l}-(\omega_{k, l})^{n+1}}{1-\omega_{k, l}} \\ & = \frac{1-(\omega_{k, l})^{n}}{4a^2\mu \left(\sin^2{(l\pi h)}+\sin^2{\frac{k \pi h}{2}}\right)} \\ &\leq \frac{1}{4a^2\mu \left(\sin^2{(l\pi h)}+\sin^2{\frac{k \pi h}{2}}\right)}. \end{align} (3.40)

    For 0\leq l\leq N , we have l\pi h\in [0, \frac{\pi}{2}] . Observe that \frac{k\pi h}{2}\in (0, \frac{\pi}{2}) \; (1\leq k\leq 2N-1) . Therefore, (3.40) yields

    \begin{align} \sum\limits^n_{m = 1}(\omega_{k, l})^{n-m+1} \lesssim \frac{1}{\mu h^2(4l^2+k^2)} \lesssim \frac{1}{\tau(l^2+k^2)}. \end{align} (3.41)

    For N\leq l\leq 2N-1 , from 2Nh = 1 and 0 < (2N-l)\pi h\leq \frac{\pi}{2} , we have

    \begin{align*} \sin{(l\pi h)} = \sin{((2N-l)\pi h)} \geq 2(2N-l) h . \end{align*}

    Hence, (3.40) now yields

    \begin{align} \sum\limits^n_{m = 1}(\omega_{k, l})^{n-m+1} \lesssim \frac{1}{\mu h^2(4(2N-l)^2+k^2)} \lesssim \frac{1}{\tau((2N-l)^2+k^2)}. \end{align} (3.42)

    Therefore, combining (3.41) with (3.42), (3.39) holds.
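    The bound (3.39) can also be observed numerically. The following illustrative Python snippet (with a = 1 and sample grid parameters) evaluates the geometric-series bound \omega_{k, l}/(1-\omega_{k, l}) from (3.40) and confirms that \tau(l^2+k^2) times the sum stays below 1/4 on the branch 0\leq l\leq N (the other branch is symmetric under l\mapsto 2N-l ).

```python
import numpy as np

# Illustrative check of (3.39) for the branch 0 <= l <= N with a = 1:
# tau*(l^2+k^2) times the geometric-series bound omega/(1-omega) from
# (3.40) stays below 1/4. Grid parameters are sample values.
a, N = 1.0, 32
h = 1.0 / (2 * N)
tau = h                                  # the bound is uniform in n and tau
mu = tau / h**2
worst = 0.0
for k in range(1, 2 * N):
    for l in range(0, N + 1):
        s2 = np.sin(l * np.pi * h)**2 + np.sin(k * np.pi * h / 2)**2
        omega = 1.0 / (1.0 + 4 * a**2 * mu * s2)
        worst = max(worst, (omega / (1.0 - omega)) * tau * (l**2 + k**2))
```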

    Now, we estimate \left\{\widetilde{e}^n_{k, l}\right\} in three cases.

    Case 1. N\leq l\leq 2N-1

    With (3.38), (3.35), (3.29), and (3.39), we have

    \begin{align} \left\vert{\widetilde{e}^n_{k, l}}\right\vert &\leq \tau\sum\limits^n_{m = 1}(\omega_{k, l})^{n-m+1}\left\vert{\widetilde{\alpha}^{m}_{k, l}}\right\vert \\ &\lesssim \frac{\tau(\tau+h^2)}{h^{\frac{1}{2}}} \sum\limits^n_{m = 1}(\omega_{k, l})^{n-m+1} \\ &\lesssim \frac{\tau+h^2}{h^{\frac{1}{2}} ((2N-l)^2+k^2)}. \end{align} (3.43)

    Case 2. 1\leq l\leq N-1

    Using (3.43), we obtain

    \begin{align} \left\vert{\widetilde{e}^n_{k, 2N-l}}\right\vert \lesssim \frac{\tau+h^2}{h^{\frac{1}{2}} (l^2+k^2)}. \end{align} (3.44)

    From the above inequality, and using (3.38), (3.35), (3.39), (3.29), and \mu = \frac{\tau}{h^2} , we obtain

    \begin{align} \left\vert{\widetilde{e}^n_{k, l}}\right\vert &\leq 2a^2\mu h\sin{(2l\pi h)}\sum\limits^n_{m = 1}(\omega_{k, l})^{n-m+1}\left\vert{\widetilde{e}^m_{k, 2N-l}}\right\vert + \tau\sum\limits^n_{m = 1}(\omega_{k, l})^{n-m+1} \left\vert{\widetilde{\alpha}^m_{k, l}}\right\vert \\ &\lesssim \left( \mu h\sin{(2l\pi h)} \cdot \frac{\tau+h^2}{h^{\frac{1}{2}} (l^2+k^2)} + \frac{\tau(\tau+h^2)}{h^{\frac{1}{2}}} \right) \sum\limits^n_{m = 1}(\omega_{k, l})^{n-m+1} \\ & \lesssim \frac{\tau+h^2}{h^{\frac{1}{2}}(l^2+k^2)} \left( \frac{\sin{(l\pi h)}}{h (l^2+k^2)}+ 1 \right) \\ & \lesssim \frac{\tau+h^2}{h^{\frac{1}{2}}(l^2+k^2)}. \end{align} (3.45)

    Case 3. l = 0

    Observing that \omega_{k, 0} = \frac{1}{1+4a^2\mu\sin^2{\frac{k\pi h}{2}}} , and arguing as in the derivation of (3.43), we obtain

    \begin{align} \left\vert{\widetilde{e}^n_{k, 0}}\right\vert & \lesssim \frac{\tau+h^2}{k^2 h^{\frac{1}{2}}}. \end{align} (3.46)

    From (3.10), (3.43), (3.45), and (3.46), and noticing that T_l(y_j) is bounded for any 0\leq l, j\leq 2N-1 , we have

    \begin{align} \left\vert{\widehat{e}^n_{k, j}}\right\vert & \leq \sum\limits^{2N-1}_{l = 0} \left\vert{\widetilde{e}^{n}_{k, l}}\right\vert\left\vert{T_l(y_j)}\right\vert \\ & \lesssim \frac{\tau+h^2}{h^{\frac{1}{2}}} \left( \frac{1}{k^2} +\sum\limits^{N-1}_{l = 1} \frac{1}{l^2+k^2} + \sum\limits^{2N-1}_{l = N} \frac{1}{(2N-l)^2+k^2} \right) \\ & \lesssim \frac{\tau+h^2}{h^{\frac{1}{2}}} \sum\limits^{N-1}_{l = 0}\frac{1}{l^2+k^2}. \end{align} (3.47)

    Furthermore, from (3.2), we obtain

    \begin{align} \left\vert{e^n_{i, j}}\right\vert &\leq \sqrt{2h}\sum\limits^{2N-1}_{k = 1}\left\vert{\widehat{e}^n_{k, j}}\right\vert \\ &\lesssim (\tau+h^2)\sum\limits^{2N-1}_{k = 1}\sum\limits^{N-1}_{l = 0}\frac{1}{l^2+k^2} \\ & \lesssim (\tau+h^2) \left( 3\sum\limits^{2N}_{k = 1}\frac{1}{k^2} + \sum\limits^{2N}_{k = 2}\sum\limits^{2N}_{l = 2}\frac{1}{l^2+k^2} \right) \\ &\lesssim (\tau+h^2) \sum\limits^{2N}_{k = 2}\sum\limits^{2N}_{l = 2}\frac{1}{l^2+k^2}. \end{align} (3.48)

    Since \frac{1}{x^2+y^2} is monotonically decreasing in each of the variables x and y for x, y > 0 , it follows that

    \begin{align} \sum\limits^{2N}_{k = 2}\sum\limits^{2N}_{l = 2}\frac{1}{l^2+k^2} & = h^2\sum\limits^{2N}_{k = 2}\sum\limits^{2N}_{l = 2}\frac{1}{(lh)^2+(kh)^2} \\ &\leq \iint\limits_{\Omega_h}\frac{1}{x^2+y^2}dxdy \\ & < \int^{\frac{\pi}{2}}_0d\theta\int^{\sqrt{2}}_h \frac{dr}{r} \\ &\lesssim \left\vert{\ln{h}}\right\vert, \end{align} (3.49)

    where \Omega_h = [h, 1]\times [h, 1] .
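    The logarithmic growth in (3.49) is easy to confirm numerically; the snippet below (illustrative, for one sample N ) compares the lattice sum with the quarter-annulus integral bound \frac{\pi}{2}\ln(\sqrt{2}/h) .

```python
import numpy as np

# Illustrative check of (3.49) for one sample N: the lattice sum is
# dominated by the quarter-annulus integral (pi/2) * ln(sqrt(2)/h).
N = 64
h = 1.0 / (2 * N)
S = sum(1.0 / (l**2 + k**2)
        for k in range(2, 2 * N + 1) for l in range(2, 2 * N + 1))
bound = (np.pi / 2) * (abs(np.log(h)) + 0.5 * np.log(2.0))
```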

    Using (3.49) and (3.48), and noticing (3.1c)–(3.1e), we can obtain the following error estimation theorem.

    Theorem 3.1. Suppose u\in C^{4}(\overline{Q}_T) . Then, for any positive integer 1\leq n \leq M , the solution of the scheme (2.9a)–(2.9f) satisfies the error estimate

    \begin{align*} \left\vert{e^n_{i, j}}\right\vert \lesssim (\tau+h^2)\left\vert{\ln{h}}\right\vert, \; \; i, j = 0, 1, \cdots, 2N. \end{align*}

    In this section, we present the approximation formulas for the partial derivatives of u with respect to two spatial variables, which exhibit superconvergence under certain smooth conditions.

    Let U_x and U_y be the approximation functions for the partial derivatives u_x and u_y , respectively. For any t_n ( 1\leq n\leq M ), we introduce the following approximation formulas for u_x and u_y at the grid point (x_i, y_j, t_n) , respectively:

    \begin{align} U_x(x_i, y_j, t_n) = \frac{U^n_{i+1, j}-U^n_{i-1, j}}{2h}, \; \; 1\leq i\leq 2N-1, \; \; 0\leq j\leq 2N, \end{align} (4.1)

    and

    \begin{align} U_y(x_i, y_j, t_n) = \frac{U^n_{i, j+1}-U^n_{i, j-1}}{2h}, \; \; 0\leq i\leq 2N, \; \; 1\leq j\leq 2N-1. \end{align} (4.2)
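    As a minimal illustration of (4.1) and (4.2) (not of the full scheme), the sketch below applies the two central-difference stencils to samples of the smooth stand-in function e^{x+y} ; the recovered derivatives are second-order accurate, as expected.

```python
import numpy as np

# Minimal sketch of the stencils (4.1)-(4.2). The scheme's output U^n_{i,j}
# is replaced here by samples of the smooth stand-in e^{x+y}, so only the
# second-order accuracy of the central differences themselves is exercised.
N = 64
h = 1.0 / (2 * N)
grid = np.linspace(0.0, 1.0, 2 * N + 1)
X, Y = np.meshgrid(grid, grid, indexing="ij")
U = np.exp(X + Y)                          # stand-in for U^n_{i,j}

Ux = (U[2:, :] - U[:-2, :]) / (2 * h)      # (4.1), 1 <= i <= 2N-1
Uy = (U[:, 2:] - U[:, :-2]) / (2 * h)      # (4.2), 1 <= j <= 2N-1

err_x = np.abs(Ux - np.exp(X[1:-1, :] + Y[1:-1, :])).max()
err_y = np.abs(Uy - np.exp(X[:, 1:-1] + Y[:, 1:-1])).max()
```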

    Before exploring the superconvergence of (4.1) and (4.2), we first present the following lemma.

    Lemma 4.1. Suppose that the function p(x)\in C^1[0, 1] satisfies

    \begin{align} \max\limits_{x\in [0, 1]}\{\left\vert{p(x)}\right\vert, \left\vert{p'(x)}\right\vert\}\leq M, \end{align} (4.3)

    where M is a positive constant. If

    \begin{align} \widehat{p}_k = \sqrt{2h}\sum\limits^{2N-1}_{i = 1} p_i \sin{(i\pi x_k)}, \; \; k = 1, 2, \cdots, 2N-1, \end{align} (4.4)

    then

    \begin{align} \left\vert{\widehat{p}_k}\right\vert \leq \frac{\sqrt{2}M\pi}{kh^{\frac{1}{2}}} \lesssim \frac{1}{kh^{\frac{1}{2}}}. \end{align} (4.5)

    Proof. Let \theta_k = \frac{k\pi h}{2} . Obviously, \theta_k\in (0, \frac{\pi}{2}) . We can easily verify the following equality:

    \begin{align} &\sum\limits^{2N-1}_{i = 1} (p_{i-1}-2p_i+p_{i+1})\sin{(i\pi x_k)} \\ & = \sum\limits^{2N-1}_{i = 1} p_i(\sin{((i+1)\pi x_k)}-2\sin{(i\pi x_k)}+\sin{((i-1)\pi x_k})) \\ & + p_0\sin{(\pi x_k)} + p_{2N}\sin{((2N-1)\pi x_k)} \\ & = -4\sin^2{\theta_k}\sum\limits^{2N-1}_{i = 1} p_i \sin{(i\pi x_k)} + (p_0+(-1)^k p_{2N})\sin{(2\theta_k)}. \end{align} (4.6)

    Noting that

    \begin{align} &\sum\limits^{2N-1}_{i = 1} (p_i-p_{i-1})\sin{(i\pi x_k)} \\ & = \sum\limits^{2N-1}_{i = 1} \left( \int^{x_i}_{x_{i-1}}p'(x)\sin{(k\pi x)}dx + \int^{x_i}_{x_{i-1}}p'(x)(\sin{(k\pi x_i)}-\sin{(k\pi x)})dx \right) \\ & = \int^{x_{2N-1}}_0 p'(x)\sin{k\pi x}dx + 2\sum\limits^{2N-1}_{i = 1} \int^{x_i}_{x_{i-1}} p'(x)\cos{\frac{k\pi(x_i+x)}{2}}\sin{\frac{k\pi(x_i-x)}{2}}dx, \end{align} (4.7)

    the following equality is also verified:

    \begin{align} &{\sum\limits^{2N-1}_{i = 1}(p_{i+1}-p_{i})\sin{(i\pi x_k)}} \\ & = \int^1_{x_1} p'(x)\sin{(k\pi x)}dx + 2\sum\limits^{2N-1}_{i = 1} \int^{x_{i+1}}_{x_{i}} p'(x)\cos{\frac{k\pi(x_{i+1}+x)}{2}}\sin{\frac{k\pi(x_{i+1}-x)}{2}}dx. \end{align} (4.8)

    Subtracting (4.7) from (4.8), we have

    \begin{align*} {\sum\limits^{2N-1}_{i = 1} (p_{i-1}-2p_i+p_{i+1})\sin{(i\pi x_k)}} & = \int^1_{x_{2N-1}} p'(x)\sin{(k\pi x)}dx - \int^{x_1}_0 p'(x)\sin{(k\pi x)}dx \nonumber\\ &+2 \int^1_{x_{2N-1}} p'(x)\cos{\frac{k\pi(1+x)}{2}}\sin{\frac{k\pi(1-x)}{2}}dx \nonumber\\ &- 2\int^{x_1}_0 p'(x)\cos{\frac{k\pi(x_1+x)}{2}}\sin{\frac{k\pi(x_1-x)}{2}}dx. \end{align*}

    Thus, using (4.3), we obtain

    \begin{align} \left\vert{\sum\limits^{2N-1}_{i = 1} (p_{i-1}-2p_i+p_{i+1})\sin{(i\pi x_k)}}\right\vert \leq 6Mh. \end{align} (4.9)

    Using (4.6), (4.3), and (4.9), and noting \theta_k\in (0, \frac{\pi}{2}) , we obtain

    \begin{align*} &\left\vert{\sum\limits^{2N-1}_{i = 1}p_i\sin{(i\pi x_k)}}\right\vert\\ & = \frac{1}{4\sin^2{\theta_k}}\left\vert{\sum\limits^{2N-1}_{i = 1} (p_{i-1}-2p_i+p_{i+1})\sin{(i\pi x_k)}-(p_0+(-1)^k p_{2N})\sin{(2\theta_k)}}\right\vert\\ &\leq \frac{3Mh+2M\sin{\theta_k}}{2\sin^2{\theta_k}} \\ &\leq \frac{M\pi}{kh}. \end{align*}

    Multiplying this bound by \sqrt{2h} and recalling (4.4), the lemma is proved.
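    Both ingredients of the proof can be observed numerically. The illustrative sketch below takes the sample choice p(x) = e^x (so that M = e in (4.3)) and checks the second-difference bound (4.9) together with the O(1/(k h^{1/2})) decay asserted in (4.5), using a generous constant for the latter.

```python
import numpy as np

# Illustrative check of Lemma 4.1 for p(x) = e^x, for which M = e
# satisfies (4.3): the second-difference bound (4.9) and the coefficient
# decay (4.5) are both visible numerically.
N = 64
h = 1.0 / (2 * N)
x = np.arange(2 * N + 1) * h               # nodes x_0, ..., x_{2N}
p = np.exp(x)

# (4.9): |sum_i (p_{i-1} - 2 p_i + p_{i+1}) sin(i pi x_k)| <= 6 M h.
d2 = p[:-2] - 2 * p[1:-1] + p[2:]
i = np.arange(1, 2 * N)
worst_d2 = max(abs(np.dot(d2, np.sin(i * np.pi * k * h)))
               for k in range(1, 2 * N))

# (4.5): k * sqrt(h) * |phat_k| remains bounded as k varies.
ks = np.arange(1, 2 * N)
phat = np.sqrt(2 * h) * np.sin(np.pi * np.outer(ks, x[1:-1])) @ p[1:-1]
worst_decay = np.max(np.abs(phat) * ks * np.sqrt(h))
```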

    Next, we study the superconvergence of (4.1).

    Theorem 4.1. Suppose u\in C^{5}(\overline{Q}_T) . Then, for any integer 1\leq n\leq M ,

    \begin{align} \left\vert{U_x(x_i, y_j, t_n)- u_x(x_i, y_j, t_n)}\right\vert \lesssim (\tau+h^2)\left\vert{\ln{h}}\right\vert, \; i = 1, 2, \cdots, 2N-1, j = 0, 1, \cdots, 2N. \end{align} (4.10)

    Proof. From (4.1) and u\in C^{5}(\overline{Q}_T) , we obtain

    \begin{align*} U_x(x_i, y_j, t_n) = u_x(x_i, y_j, t_n) + \frac{e^n_{i+1, j}-e^n_{i-1, j}}{2h} + O(h^2). \end{align*}

    Thus, in order to prove this theorem, it suffices to prove the following inequality, i.e., for any given integer 1\leq n\leq M ,

    \begin{align} \left\vert{\frac{e^n_{i+1, j}-e^n_{i-1, j}}{2h}}\right\vert \lesssim (\tau+h^2)\left\vert{\ln{h}}\right\vert, \; i = 1, 2, \cdots, 2N-1, j = 0, 1, \cdots, 2N. \end{align} (4.11)

    From (3.2) and (3.10), we have

    \begin{align} \frac{e^n_{i+1, j}-e^n_{i-1, j}}{2h} & = \frac{2}{\sqrt{2h}}\sum\limits^{2N-1}_{k = 1} \widehat{e}^n_{k, j} \sin{(k\pi h)}\cos{(k\pi x_i)} \\ & = \frac{2}{\sqrt{2h}}\sum\limits^{2N-1}_{k = 1} \sum\limits^{2N-1}_{l = 0}\widetilde{e}^n_{k, l}T_l(y_j) \sin{(k\pi h)}\cos{(k\pi x_i)}. \end{align} (4.12)
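    The first equality in (4.12) is a purely trigonometric identity; the sketch below verifies it for random stand-in coefficients \widehat{e}^n_{k, j} at a fixed j .

```python
import numpy as np

# The first equality in (4.12) is a pure trigonometric identity. Here it is
# verified for random stand-in coefficients ehat_k (at a fixed j).
rng = np.random.default_rng(1)
N = 16
h = 1.0 / (2 * N)
x = np.arange(2 * N + 1) * h
k = np.arange(1, 2 * N)
ehat = rng.standard_normal(2 * N - 1)
e = np.sqrt(2 * h) * np.sin(np.pi * np.outer(x, k)) @ ehat  # sine expansion (3.2)

lhs = (e[2:] - e[:-2]) / (2 * h)           # central difference at x_1..x_{2N-1}
xi = x[1:-1]
rhs = (2 / np.sqrt(2 * h)) * (np.sin(np.pi * k * h)
                              * np.cos(np.pi * np.outer(xi, k))) @ ehat
```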

    Since u\in C^5(\overline{Q}_T) , using Lemma 4.1, (2.8), and (3.3), we obtain

    \begin{align} \left\vert{\widehat{\alpha}^n_{k, j}}\right\vert\lesssim \frac{\tau+h^2}{kh^{\frac{1}{2}}}. \end{align} (4.13)

    Correspondingly, (3.29) is sharpened to

    \begin{align} \left\vert{\widetilde{\alpha}^n_{k, l}}\right\vert \lesssim \frac{\tau+h^2}{kh^{\frac{1}{2}}}. \end{align} (4.14)

    Accordingly, by modifying the derivations of (3.43), (3.45), and (3.46), we obtain

    \begin{align} \left\vert{\widetilde{e}^n_{k, l}}\right\vert \lesssim \begin{cases} \frac{\tau+h^2}{kh^{\frac{1}{2}}(l^2+k^2)}, & 0\leq l\leq N, \\ \frac{\tau+h^2}{kh^{\frac{1}{2}}((2N-l)^2+k^2)}, & N+1\leq l\leq 2N-1. \end{cases} \end{align} (4.15)

    Using (4.12) and (4.15), and given \left\vert{T_l(y)}\right\vert\leq 1 for y\in [0, 1] , we obtain

    \begin{align} \left\vert{\frac{e^n_{i+1, j}-e^n_{i-1, j}}{2h}}\right\vert &\lesssim h^{\frac{1}{2}}\sum\limits^{2N-1}_{k = 1}k\sum\limits^{2N-1}_{l = 0} \left\vert{\widetilde{e}^n_{k, l}}\right\vert \\ &\lesssim (\tau+h^2)\sum\limits^{2N-1}_{k = 1} \left( \sum\limits^{N-1}_{l = 0}\frac{1}{l^2+k^2} + \sum\limits^{2N-1}_{l = N}\frac{1}{(2N-l)^2+k^2} \right) \\ & \lesssim (\tau+h^2) \sum\limits^{2N}_{k = 2}\sum\limits^{2N}_{l = 2}\frac{1}{l^2+k^2}. \end{align} (4.16)

    From this, using (3.49), we prove (4.11). Therefore, (4.10) holds.

    In the following, we discuss the superconvergence properties of (4.2).

    Theorem 4.2. Suppose u\in C^{5}(\overline{Q}_T) . Then, for any integer 1\leq n\leq M ,

    \begin{align} \left\vert{U_y(x_i, y_j, t_n)- u_y(x_i, y_j, t_n)}\right\vert \lesssim (\tau+h^2)\ln^2{h}, \; i = 0, 1, \cdots, 2N, j = 1, 2, \cdots, 2N-1. \end{align} (4.17)

    Proof. From (4.2) and u\in C^{5}(\overline{Q}_T) , we obtain

    \begin{align} U_y(x_i, y_j, t_n) = u_y(x_i, y_j, t_n) + \frac{e^n_{i, j+1}-e^n_{i, j-1}}{2h} + O(h^2). \end{align} (4.18)

    From (4.18), in order to prove this theorem, we only need to prove that for any integer 1\leq n\leq M ,

    \begin{align} \left\vert{\frac{e^n_{i, j+1}-e^n_{i, j-1}}{2h}}\right\vert \lesssim (\tau+h^2)\ln^2{h}, \; i = 0, 1, \cdots, 2N, j = 1, 2, \cdots, 2N-1. \end{align} (4.19)

    Using (3.2) and (3.10), and noting that T_0(y_{j+1})-T_0(y_{j-1}) = 0 , we find that

    \begin{align} \frac{e^n_{i, j+1}-e^n_{i, j-1}}{2h} & = \frac{2}{\sqrt{2h}}\sum\limits^{2N-1}_{k = 1} (\widehat{e}^n_{k, j+1} -\widehat{e}^n_{k, j-1})\sin{(k\pi x_i)} \\ & = \frac{2}{\sqrt{2h}}\sum\limits^{2N-1}_{k = 1} \sum\limits^{2N-1}_{l = 0}\widetilde{e}^n_{k, l}(T_l(y_{j+1})-T_l(y_{j-1}))\sin{(k\pi x_i)} \\ & = \frac{2}{\sqrt{2h}}\sum\limits^{2N-1}_{k = 1} \sum\limits^{2N-1}_{l = 1}\widetilde{e}^n_{k, l}(T_l(y_{j+1})-T_l(y_{j-1}))\sin{(k\pi x_i)}. \end{align} (4.20)

    Using (3.11), when N\leq l\leq 2N-1 , we have

    \begin{align} \left\vert{T_l(y_{j+1})-T_l(y_{j-1})}\right\vert & \leq 2\left\vert{y_{j-1}\cos{((y_{j+1}+y_{j-1})l\pi)}\sin{(2l\pi h)}}\right\vert+2h\left\vert{\sin{(2l\pi y_{j+1})}}\right\vert \\ & \lesssim \left\vert{\sin{((2N-l)\pi h)}}\right\vert + h \\ & \lesssim (2N-l)h. \end{align} (4.21)

    Furthermore, we also deduce that

    \begin{align} \left\vert{T_l(y_{j+1})-T_l(y_{j-1})}\right\vert = 2\left\vert{\sin{((y_{j+1}+y_{j-1})l\pi)}\sin{(2l\pi h)}}\right\vert \lesssim lh . \end{align} (4.22)

    Using (4.20), (4.15), (4.21), and (4.22), we obtain

    \begin{align} \left\vert{\frac{e^n_{i, j+1}-e^n_{i, j-1}}{2h}}\right\vert & \lesssim \frac{1}{h^{\frac{1}{2}}} \sum\limits^{2N-1}_{k = 1} \sum\limits^{2N-1}_{l = 1}\left\vert{\widetilde{e}^n_{k, l}}\right\vert\left\vert{T_l(y_{j+1})-T_l(y_{j-1})}\right\vert \\ & \lesssim \frac{\tau+h^2}{h} \sum\limits^{2N-1}_{k = 1} \left(\sum\limits^{N}_{l = 1}\frac{lh}{k(k^2+l^2)} +\sum\limits^{2N-1}_{l = N+1} \frac{(2N-l)h}{k(k^2+(2N-l)^2)} \right) \\ & \lesssim (\tau+h^2) \sum\limits^{2N-1}_{k = 1} \sum\limits^{N}_{l = 1}\frac{l}{k(k^2+l^2)} \\ & \lesssim (\tau+h^2) \left( \sum\limits^{2N}_{l = 1}\frac{l}{1+l^2} +\sum\limits^{2N}_{k = 1}\frac{1}{k(k^2+1)} +\sum\limits^{2N}_{k = 2} \sum\limits^{2N}_{l = 2}\frac{l}{k(k^2+l^2)} \right) \\ & \lesssim (\tau+h^2)\left(\left\vert{\ln{h}}\right\vert+\sum\limits^{2N}_{k = 2} \sum\limits^{2N}_{l = 2}\frac{l}{k(k^2+l^2)}\right). \end{align} (4.23)

    Upon observing

    \begin{align*} 2\sum\limits^{2N}_{k = 2} \sum\limits^{2N}_{l = 2}\frac{l}{k(k^2+l^2)} & = \sum\limits^{2N}_{k = 2} \sum\limits^{2N}_{l = 2} \left(\frac{l}{k(k^2+l^2)} + \frac{k}{l(k^2+l^2)}\right) \nonumber\\ & = \sum\limits^{2N}_{k = 2} \sum\limits^{2N}_{l = 2}\frac{1}{kl} \nonumber\\ & \leq \ln^2{h}, \end{align*}

    by substituting this result into (4.23), we obtain

    \begin{align*} \left\vert{\frac{e^n_{i, j+1}-e^n_{i, j-1}}{2h}}\right\vert \lesssim (\tau+h^2)\ln^2{h}. \end{align*}

    This result confirms (4.19), thereby establishing the validity of (4.17).
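    The symmetrization step above rests on the exact identity \frac{l}{k(k^2+l^2)}+\frac{k}{l(k^2+l^2)} = \frac{1}{kl} ; the snippet below confirms it numerically, together with the \ln^2{h} bound, for one sample N .

```python
import math

# The symmetrization above rests on the exact identity
#   l/(k(k^2+l^2)) + k/(l(k^2+l^2)) = 1/(kl),
# and the resulting sum factorizes and is bounded by ln^2(1/h).
N = 32
h = 1.0 / (2 * N)
lhs = 2 * sum(l / (k * (k**2 + l**2))
              for k in range(2, 2 * N + 1) for l in range(2, 2 * N + 1))
rhs = sum(1.0 / (k * l)
          for k in range(2, 2 * N + 1) for l in range(2, 2 * N + 1))
```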

    In this section, we present two numerical examples to validate the theoretical results and investigate the efficiency and the superconvergence properties of the numerical schemes. Our aim is to demonstrate the practical implications of the theoretical findings and assess the performance of the proposed methods. Let

    \begin{align*} \|U-u\|_{\infty} &: = \max\limits_{\substack{1\leq n\leq M\\ 0\leq i, j\leq 2N}}\left\vert{U^n_{i, j}-u^n_{i, j}}\right\vert, \\ \; \; \|U_x-u_x\|_{\infty} &: = \max\limits_{\substack{1\leq n\leq M\\ 1\leq i \leq 2N-1\\0\leq j\leq 2N}}\left\vert{(U_x)^n_{i, j}-(u_x)^n_{i, j}}\right\vert, \\ \; \; \|U_y-u_y\|_{\infty} &: = \max\limits_{\substack{1\leq n\leq M\\ 0\leq i \leq 2N\\ 1\leq j\leq 2N-1}}\left\vert{(U_y)^n_{i, j}-(u_y)^n_{i, j}}\right\vert. \end{align*}

    Example 5.1. In (1.1)–(1.6), take

    \begin{align*} &a = 1, \; \; T = 1, \; \; f(x, y, t) = 0, \; \; g(x, y) = e^{x+y}, \; \; \mu_1(y, t) = e^{y+2t}, \\ & \mu_2(y, t) = e^{1+y+2t}, \; \; \mu_3(x, t) = e^{x+2t}(1-e), \; \; \mu_4(x, t) = e^{x+2t}. \end{align*}

    It is easily verified that the exact solution is u = e^{x+y+2t} .
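    This claim can be checked symbolically. The sketch below uses SymPy to confirm that u = e^{x+y+2t} satisfies u_t = u_{xx}+u_{yy} (with a = 1 , f = 0 ) and matches g , \mu_1 , and \mu_2 under the natural reading of the boundary data (an assumption here, since (1.1)–(1.6) are stated earlier in the paper).

```python
import sympy as sp

# Symbolic verification for Example 5.1: u = e^{x+y+2t} satisfies
# u_t = u_xx + u_yy (a = 1, f = 0) and matches g, mu_1, mu_2 under the
# natural reading of the data (assumed, since (1.1)-(1.6) appear earlier).
x, y, t = sp.symbols("x y t")
u = sp.exp(x + y + 2 * t)
residual = sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2) - sp.diff(u, y, 2))
g_ok = sp.simplify(u.subs(t, 0) - sp.exp(x + y)) == 0            # g(x, y)
mu1_ok = sp.simplify(u.subs(x, 0) - sp.exp(y + 2 * t)) == 0      # mu_1 at x = 0
mu2_ok = sp.simplify(u.subs(x, 1) - sp.exp(1 + y + 2 * t)) == 0  # mu_2 at x = 1
```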

    The results are reported in Tables 1–3. From Table 1, we observe that in the cases \tau = h^2 and \tau = h , the error \|U-u\|_{\infty} is approximately of order O(h^2) and O(h) , respectively. This observation verifies the correctness of Theorem 3.1.

    Table 1.  Error with respect to u in \tau = h^2 and \tau = h for Example 5.1.

    h      | \tau = h^2: \|U-u\|_{\infty}   ratio | \tau = h: \|U-u\|_{\infty}   ratio
    1/32   | 4.4377e-003    -    | 1.2941e-001    -
    1/64   | 1.1104e-003    4.00 | 6.5173e-002    1.99
    1/128  | 2.7768e-004    4.00 | 3.2701e-002    1.99
    1/256  | 6.9427e-005    4.00 | 1.6379e-002    2.00
    Table 2.  Error of u_x and u_y in \tau = h^2 for Example 5.1.

    h      \tau     | \|U_x-u_x\|_{\infty}   ratio | \|U_y-u_y\|_{\infty}   ratio
    1/32   1/32^2   | 1.7481e-002    -    | 7.0681e-003    -
    1/64   1/64^2   | 4.4901e-003    3.89 | 1.8960e-003    3.73
    1/128  1/128^2  | 1.1379e-003    3.95 | 5.0110e-004    3.78
    1/256  1/256^2  | 2.8640e-004    3.97 | 1.3025e-004    3.85
    Table 3.  Error of u_x and u_y in \tau = h for Example 5.1.

    h      \tau    | \|U_x-u_x\|_{\infty}   ratio | \|U_y-u_y\|_{\infty}   ratio
    1/32   1/32    | 6.1798e-001    -    | 1.7722e-001    -
    1/64   1/64    | 3.3247e-001    1.86 | 1.0037e-001    1.77
    1/128  1/128   | 1.7242e-001    1.93 | 5.3329e-002    1.88
    1/256  1/256   | 8.7795e-002    1.96 | 2.7478e-002    1.94

    Furthermore, from Tables 2 and 3, it is evident that when \tau = h^2 , both \|U_x-u_x\|_{\infty} and \|U_y-u_y\|_{\infty} are close to the order O(h^2) . On the other hand, when \tau = h , \|U_x-u_x\|_{\infty} and \|U_y-u_y\|_{\infty} approach the order O(h) . These findings support the theoretical expectations regarding the convergence rates of the spatial derivatives. Therefore, the correctness of Theorems 4.1 and 4.2 is verified.
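    The "ratio" columns of the tables translate into empirical convergence orders via \log_2 of successive error quotients; the snippet below does this for the \tau = h^2 column of Table 1 and recovers order two.

```python
import math

# Empirical convergence orders from the "ratio" columns: log2 of successive
# error quotients. Data: the tau = h^2 column of Table 1.
errors = [4.4377e-3, 1.1104e-3, 2.7768e-4, 6.9427e-5]
ratios = [e1 / e2 for e1, e2 in zip(errors, errors[1:])]
orders = [math.log2(r) for r in ratios]    # all close to 2, i.e., O(h^2)
```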

    Example 5.2. In problems (1.1)–(1.6), take

    \begin{align*} &a = 1, \; \; T = 1, \; \; f(x, y, t) = 0, \; \; g(x, y) = (1+y)e^{x}, \; \; \mu_1(y, t) = (1+y)e^{t}, \; \; \\ & \mu_2(y, t) = (1+y)e^{1+t}, \; \; \mu_3(x, t) = -e^{x+t}, \; \; \mu_4(x, t) = e^{x+t}. \end{align*}

    It is easily verified that its exact solution is u = (1+y)e^{x+t} .
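    As in Example 5.1, a short SymPy check (illustrative) confirms the exact solution:

```python
import sympy as sp

# Symbolic check for Example 5.2: u = (1+y) e^{x+t} satisfies
# u_t = u_xx + u_yy (a = 1, f = 0) with initial datum g = (1+y) e^x.
x, y, t = sp.symbols("x y t")
u = (1 + y) * sp.exp(x + t)
residual = sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2) - sp.diff(u, y, 2))
g_ok = sp.simplify(u.subs(t, 0) - (1 + y) * sp.exp(x)) == 0
```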

    Numerical results for Example 5.2 are reported in Tables 4–6. These results once again verify the correctness of Theorems 3.1, 4.1, and 4.2.

    Table 4.  Error with respect to u in \tau = h^2 and \tau = h for Example 5.2.

    h      | \tau = h^2: \|U-u\|_{\infty}   ratio | \tau = h: \|U-u\|_{\infty}   ratio
    1/32   | 4.2726e-004    -    | 1.1677e-002    -
    1/64   | 1.1104e-003    4.00 | 5.8545e-003    1.99
    1/128  | 1.0692e-004    4.00 | 2.9303e-003    2.00
    1/256  | 6.6840e-006    4.00 | 1.4660e-003    2.00
    Table 5.  Error of u_x and u_y in \tau = h^2 for Example 5.2.

    h      \tau     | \|U_x-u_x\|_{\infty}   ratio | \|U_y-u_y\|_{\infty}   ratio
    1/32   1/32^2   | 2.2208e-003    -    | 3.8223e-004    -
    1/64   1/64^2   | 5.6284e-004    3.95 | 1.0418e-004    3.67
    1/128  1/128^2  | 1.4169e-004    3.97 | 2.7176e-005    3.83
    1/256  1/256^2  | 3.5544e-005    3.99 | 6.9401e-006    3.92
    Table 6.  Error of u_x and u_y in \tau = h for Example 5.2.

    h      \tau    | \|U_x-u_x\|_{\infty}   ratio | \|U_y-u_y\|_{\infty}   ratio
    1/32   1/32    | 5.1907e-002    -    | 1.0436e-002    -
    1/64   1/64    | 2.7983e-002    1.85 | 5.7010e-003    1.83
    1/128  1/128   | 1.4518e-002    1.93 | 2.9780e-003    1.91
    1/256  1/256   | 7.3924e-003    1.96 | 1.5219e-003    1.96

    This work focuses on a heat conduction problem with nonlocal boundary conditions. We develop an implicit Euler scheme and, by means of the discrete Fourier transform, prove that it achieves the asymptotically optimal error order. Furthermore, we introduce two approximation formulas for the first-order partial derivatives of the exact solution along the x and y directions, respectively, and show that both exhibit superconvergence. In the future, we plan to extend this work to other difference schemes for parabolic problems with nonlocal boundary conditions, such as the explicit Euler scheme and the Crank-Nicolson scheme. Additionally, we aim to consider heat conduction problems with different nonlocal boundary conditions.

    Liping Zhou: Conceptualization, methodology, formal analysis, writing-original draft, validation; Yumei Yan: Editing, software; Ying Liu: Writing-review and editing. All authors contributed equally to the manuscript. All authors have read and approved the final version of the manuscript for publication.

    This work is partially supported by the National Natural Science Foundation of China (No.12101224), the Hunan Provincial Natural Science Foundation of China (No. 2022JJ30271, 2024JJ7203) and the Key Project of Hunan Provincial Education Department of China (No. 23A0577).

    The authors declare no conflicts of interest.



  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)