Research article

Existence and multiplicity of solutions for generalized asymptotically linear Schrödinger-Kirchhoff equations

  • In this paper, we investigate the nonlinear Schrödinger-Kirchhoff equations on the whole space. By using the Morse index of the reduced Schrödinger operator, we show the existence and multiplicity of solutions for this problem with asymptotically linear nonlinearity via variational methods.

    Citation: Yuan Shan, Baoqing Liu. Existence and multiplicity of solutions for generalized asymptotically linear Schrödinger-Kirchhoff equations[J]. AIMS Mathematics, 2021, 6(6): 6160-6170. doi: 10.3934/math.2021361




    In this paper, we introduce a numerical method based on the spectral collocation method to solve nonlinear fractional quadratic integral equations

    y(x) = a(x)+\frac{f(x,y(x))}{\Gamma(\alpha)}\int_{0}^{x}(x-t)^{\alpha-1}g(t,y(t))dt,\ \ \ \ \alpha\in(0,1],\ \ \ x\in[0,1], (1.1)

    where \Gamma(\cdot) is the gamma function, y(x) is the unknown function and a:[0,1]\rightarrow \mathbb{R} is a given function. The functions f and g satisfy the following conditions:

    (1) The functions f, g:[0,1]\times \mathbb{R}\rightarrow \mathbb{R} in (1.1) are continuous and bounded functions with

    M_{1}: = \sup\limits_{(x,y)\in[0,1]\times \mathbb{R}}\vert f(x,y)\vert,\ \ \ M_{2}: = \sup\limits_{(x,y)\in[0,1]\times \mathbb{R}}\vert g(x,y)\vert.

    (2) The functions f and g satisfy a Lipschitz condition with respect to the second variable, i.e., there exist constants L_{1}>0 and L_{2}>0 such that, for all (x,y_{1}) and (x,y_{2}) , we have

    \vert f(x,y_{1})-f(x,y_{2})\vert \leq L_{1}\vert y_{1}-y_{2}\vert,
    \vert g(x,y_{1})-g(x,y_{2})\vert \leq L_{2}\vert y_{1}-y_{2}\vert.
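As a concrete illustration (with a hypothetical pair f and g of our own choosing, not taken from this paper's examples), f(x,y) = \sin(y)/4 and g(x,y) = \cos(y) satisfy conditions (1) and (2) with M_{1} = L_{1} = 1/4 and M_{2} = L_{2} = 1 ; a short script can spot-check the bounds on a grid:

```python
import math

# Hypothetical data (not from the paper) illustrating conditions (1) and (2):
# both functions are bounded, and both are Lipschitz in y because
# |d/dy sin(y)/4| <= 1/4 and |d/dy cos(y)| <= 1.
def f(x, y):
    return math.sin(y) / 4.0   # M1 = 1/4, L1 = 1/4

def g(x, y):
    return math.cos(y)         # M2 = 1,   L2 = 1

M1, L1 = 0.25, 0.25
M2, L2 = 1.0, 1.0

# Spot-check boundedness and the Lipschitz inequalities on a sample grid.
xs = [i / 20.0 for i in range(21)]
ys = [-5.0 + i * 0.5 for i in range(21)]
for x in xs:
    for y1 in ys:
        assert abs(f(x, y1)) <= M1 and abs(g(x, y1)) <= M2
        for y2 in ys:
            assert abs(f(x, y1) - f(x, y2)) <= L1 * abs(y1 - y2) + 1e-12
            assert abs(g(x, y1) - g(x, y2)) <= L2 * abs(y1 - y2) + 1e-12
print("conditions (1) and (2) hold on the sample grid")
```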

    Integral equations are used to model some practical physical problems in the theory of radiative transfer, kinetic theory of gases, neutron transport, and traffic theory [12,17,28,30,33]. Also, some applications in the load leveling problem of energy systems, airfoils and optimal control problems can be found in [24,25,39,40,41]. Existence and uniqueness theorems and some other properties of quadratic integral equations have been studied in [6,16,17,19,48]. So far, various numerical methods for solving quadratic integral equations have been introduced: Adomian decomposition method [17,18,60], repeated trapezoidal methods [18], modified hat functions method [37], piecewise linear functions method [38], Chebyshev cardinal functions method [27], etc.

    Spectral methods are a class of reliable techniques for solving various mathematical models of real-life phenomena. The general framework of these methods is based on approximating the solutions of the problems by a finite series of orthogonal polynomials \sum_{i} c_{i}\xi_{i} , where the basis functions \xi_{i} can be chosen as Legendre, Chebyshev, Hermite, Jacobi polynomials and so on. Spectral methods have been developed to solve various types of fractional differential equations; see, e.g., [1,8,14,21,23,44,53,55,59]. The spectral collocation method is a powerful approach that provides high-accuracy approximations of the solutions of both linear and nonlinear problems, provided that these solutions are sufficiently smooth [9,11,22,58]. Spectral collocation methods based on extended classes of B-spline functions and finite difference formulations have been investigated to find approximate solutions of time-fractional partial differential equations [2,3,4,32,50]. In recent years, extensions of spectral methods based on fractional-order basis functions have been developed for solving fractional differential and integral problems [5,20,29,35,52,56,57]. In these works, the authors constructed the fractional-order basis functions by substituting x\rightarrow x^{\gamma} , (0<\gamma<1) in the standard basis functions.

    The Chelyshkov orthogonal polynomials were introduced in [10] and have since been used to solve various classes of differential and integral equations: mixed functional integro-differential equations [43], weakly singular integral equations [46,52], nonlinear Volterra-Hammerstein integral equations [7], multi-order fractional differential equations [51], two-dimensional Fredholm-Volterra integral equations [49], systems of fractional delay differential equations [36], and Volterra-Hammerstein delay integral equations [47]. Some properties of the Chelyshkov polynomials are as follows:

    ● The Chelyshkov polynomials C_{N,n}(x) can be expressed in terms of the Jacobi polynomials P_{m}^{(\gamma,\delta)}(x) [9] by the following relation

    C_{N,n}(x) = (-1)^{N-n}x^{n}P_{N-n}^{(0,2n+1)}(2x-1) = \sum\limits_{j = n}^{N}(-1)^{j-n}\binom{N-n}{j-n}\binom{N+j+1}{N-n}x^{j},\ \ n = 0,...,N. (1.2)

    In the set \{C_{N,n}(x)\}_{n = 0}^{N} , every member has degree N with N-n simple roots. Hence, for every N , if the roots of the polynomial C_{N,0}(x) are chosen as collocation points, then an accurate numerical collocation method can be derived (for more details see [10]).

    ● The Chelyshkov polynomials (1.2) are orthogonal on the interval [0,1] with respect to the weight function w(x)=1, i.e.,

    \int_{0}^{1}C_{N,i}(x)C_{N,j}(x)dx = \left\{ \begin{array}{ll} 0, & i\neq j,\\ \frac{1}{2i+1}, & i = j. \end{array} \right.
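The orthogonality relation above can be verified in exact rational arithmetic directly from the expansion (1.2); the following sketch (our own check, not part of the paper) does so for N = 5 :

```python
from fractions import Fraction
from math import comb

def chelyshkov_coeffs(N, n):
    """Coefficients {j: c_j} of C_{N,n}(x) = sum_j c_j x^j from Eq. (1.2)."""
    return {j: Fraction((-1) ** (j - n) * comb(N - n, j - n) * comb(N + j + 1, N - n))
            for j in range(n, N + 1)}

def inner(N, i, j):
    """Exact integral of C_{N,i}(x) * C_{N,j}(x) over [0, 1]:
    each monomial pair x^p * x^q integrates to 1/(p+q+1)."""
    ci, cj = chelyshkov_coeffs(N, i), chelyshkov_coeffs(N, j)
    return sum(a * b / (p + q + 1) for p, a in ci.items() for q, b in cj.items())

N = 5
for i in range(N + 1):
    for j in range(N + 1):
        expected = Fraction(1, 2 * i + 1) if i == j else Fraction(0)
        assert inner(N, i, j) == expected
print("Chelyshkov orthogonality verified exactly for N = 5")
```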

    Various numerical methods have been proposed for solving fractional quadratic integral equations; however, some of them do not take the singular behavior of the solutions into account. Most of these methods lie in the class of spectral methods and attempt to solve the problem via integer-order polynomial bases. For solutions with singular behavior, the resulting numerical approximations are poor, and hence their convergence rates are not acceptable. Therefore, these methods cannot be considered a comprehensive tool for solving fractional integral equations of the form (1.1). These disadvantages motivated us to overcome this drawback by developing a spectral method based on proper basis functions that covers both smooth and non-smooth solutions of Eq (1.1). In this paper, we introduce a spectral collocation method that implements a sequence of fractional-order Chelyshkov polynomials as basis functions to produce the numerical solution of Eq (1.1), taking the singular behavior of the exact solution into account. These polynomials are constructed by substituting x\rightarrow x^{\gamma} , (0<\gamma<1) in the standard Chelyshkov polynomials [10]; i.e.,

    \widehat{C}_{N,n,\gamma}(x): = C_{N,n}(x^{\gamma}),\ \ \ n = 0,...,N,

    which have both integer and non-integer powers. In this paper, we first convert Eq (1.1) into a system of integral equations with a linear integral operator. Then, the numerical method is implemented to reduce this problem to a set of nonlinear algebraic equations. This allows us to determine the approximate solution of Eq (1.1) with a higher order of accuracy than numerical methods based on standard orthogonal basis functions.

    The contribution of this paper can be summarized as follows:

    ● In Theorems 4.1 and 4.2, we construct the operational matrices of fractional integration and multiplication based on the fractional-order Chelyshkov polynomials with a simple computational technique that is easy to implement in computer programming.

    ● The upper bound for the error vectors of the operational matrices is discussed in Theorems 4.4 and 4.5.

    ● The proposed numerical method is applied to an equivalent system of integral equations of the form (5.3), which includes a linear integral term, to reduce the problem to a system of algebraic equations. This method is based on simple operational matrix techniques so that, unlike other methods, it does not require any discretization, linearization, or perturbation (see Section 5).

    ● The approximate solution is expressed as a linear combination of fractional-order terms of the form x^{i\gamma} , which overcomes the drawback of a poor rate of convergence. The accuracy of the method for solving Eq (1.1) with non-smooth solutions is confirmed through theoretical and numerical results.

    ● The convergence analysis and numerical stability of the method are investigated.

    The content of this paper is organized as follows: Section 2 contains some necessary definitions that are used in the rest of the paper. The fractional-order Chelyshkov polynomials and their properties are investigated in Section 3. The operational matrices of integration and product of the fractional-order Chelyshkov polynomials are derived in Section 4. In Section 5, we explain the application of the operational matrices with the spectral collocation method to obtain the numerical solution of Eq (1.1). The convergence analysis of the method is studied in Section 6. In Section 7, some numerical results are presented to illustrate the accuracy and efficiency of the method. Section 8 is devoted to conclusions and future work.

    In this section, we recall some preliminary results which will be needed throughout the paper. With the development of the theories of fractional derivatives and integrals, many definitions have appeared, such as the Riemann-Liouville definition [45], which is described as follows:

    For u\in L^{1}[a,b] , the Riemann-Liouville fractional integral of order \gamma\in\mathbb{R}_{0}^{+}: = \mathbb{R}^{+}\cup\{0\} is defined as

    J_{a}^{\gamma}u(x) = \frac{1}{\Gamma(\gamma)}\int_{a}^{x}(x-t)^{\gamma-1}u(t)dt,\ \ \ \ \ \gamma>0. (2.1)

    For \gamma = 0 , set J_{a}^{0}: = I , the identity operator. Let u(x) = (x-a)^{\beta} for some \beta>-1 and \gamma>0 . Then

    J_{a}^{\gamma}u(x) = \frac{\Gamma(\beta+1)}{\Gamma(\gamma+\beta+1)}(x-a)^{\gamma+\beta}. (2.2)
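Formula (2.2) can be checked numerically; the sketch below (an illustration with our own choice \gamma = 1/2 , \beta = 2 , a = 0 ) evaluates the Riemann-Liouville integral by a midpoint rule after a substitution that removes the kernel singularity:

```python
import math

def rl_integral_power(gamma, beta, x, nquad=200_000):
    """Riemann-Liouville integral J_0^gamma of u(t) = t^beta at the point x,
    computed by the midpoint rule after the substitution s = (x - t)^gamma,
    which turns the singular kernel (x-t)^(gamma-1) dt into the flat measure
    (1/gamma) ds on [0, x^gamma]."""
    upper = x ** gamma
    h = upper / nquad
    total = 0.0
    for k in range(nquad):
        s = (k + 0.5) * h
        t = x - s ** (1.0 / gamma)   # inverse substitution
        total += t ** beta
    return total * h / (gamma * math.gamma(gamma))

# Compare with the closed form (2.2):
# J^gamma t^beta = Gamma(beta+1)/Gamma(gamma+beta+1) * x^(gamma+beta).
gamma, beta, x = 0.5, 2.0, 1.0
exact = math.gamma(beta + 1) / math.gamma(gamma + beta + 1) * x ** (gamma + beta)
approx = rl_integral_power(gamma, beta, x)
print(approx, exact)
assert abs(approx - exact) < 1e-6
```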

    Let m = \lceil\gamma\rceil . The operator D_{a}^{\gamma} defined by

    D_{a}^{\gamma}u(x) = D^{m}J_{a}^{m-\gamma}u(x), (2.3)

    is called the Riemann-Liouville fractional differential operator of order \gamma . For \gamma = 0 , set D_{a}^{0}: = I , the identity operator. The Caputo fractional differential operator of order \gamma is defined by

    D_{*a}^{\gamma}u(x) = D_{a}^{\gamma}\big[u(x)-T_{m-1}[u(x);a]\big], (2.4)

    whenever D_{a}^{\gamma}\big[u(x)-T_{m-1}[u(x);a]\big] exists, where T_{m-1}[u(x);a] denotes the Taylor polynomial of degree m-1 of the function u around the point a . In the case m = 0 , define T_{m-1}[u(x);a]: = 0 . Under the above conditions, it is easy to show that

    D_{*a}^{\gamma}u(x) = J_{a}^{m-\gamma}u^{(m)}(x). (2.5)

    For more details, see [13,45].

    Theorem 2.1. [42] (Generalized Taylor's formula). Suppose that D_{*0}^{k\gamma}u(x)\in C(0,1] for k = 0,1,...,N+1 . Then, we can write

    u(x) = \sum\limits_{i = 0}^{N}\frac{x^{i\gamma}}{\Gamma(i\gamma +1)}D_{*0}^{i\gamma}u(0^{+})+\frac{x^{(N+1)\gamma}}{\Gamma((N+1)\gamma +1)}D_{*0}^{(N+1)\gamma}u(\xi), (2.6)

    with 0<\xi\leq x , for all x\in(0,1] . Also, we have

    \left\vert u(x)-\sum\limits_{i = 0}^{N}\frac{x^{i\gamma}}{\Gamma(i\gamma +1)}D_{*0}^{i\gamma}u(0^{+})\right\vert \leq \frac{\mathcal{M}_{\gamma}}{\Gamma((N+1)\gamma +1)}, (2.7)

    provided that \vert D_{*0}^{(N+1)\gamma}u(\xi)\vert \leq \mathcal{M}_{\gamma} .

    This section includes the definition of the fractional Chelyshkov polynomials (FCHPs) and some of their essential properties that will be used in the next sections. The FCHPs on the interval [0,1] are defined as [52]

    \widehat{C}_{N,n,\gamma}(x) = \sum\limits_{j = n}^{N}(-1)^{j-n}\binom{N-n}{j-n}\binom{N+j+1}{N-n}x^{j\gamma},\ \ 0<\gamma<1,\ n = 0,1,...,N. (3.1)

    These polynomials are orthogonal with respect to the weight function w(x) = x^{\gamma-1} :

    \langle\widehat{C}_{N,i,\gamma}(x),\widehat{C}_{N,j,\gamma}(x)\rangle: = \int_{0}^{1}\widehat{C}_{N,i,\gamma}(x)\widehat{C}_{N,j,\gamma}(x)w(x)dx = \left\{ \begin{array}{ll} 0, & i\neq j,\\ \frac{1}{\gamma(2i+1)}, & i = j. \end{array} \right. (3.2)

    For N=5, we have

    \begin{array}{l} \widehat{C}_{5,0,\gamma}(x) = 6-105x^{\gamma}+560x^{2\gamma}-1260x^{3\gamma}+1260x^{4\gamma}-462x^{5\gamma},\\ \widehat{C}_{5,1,\gamma}(x) = 35x^{\gamma}-280x^{2\gamma}+756x^{3\gamma}-840x^{4\gamma}+330x^{5\gamma},\\ \widehat{C}_{5,2,\gamma}(x) = 56x^{2\gamma}-252x^{3\gamma}+360x^{4\gamma}-165x^{5\gamma},\\ \widehat{C}_{5,3,\gamma}(x) = 36x^{3\gamma}-90x^{4\gamma}+55x^{5\gamma},\\ \widehat{C}_{5,4,\gamma}(x) = 10x^{4\gamma}-11x^{5\gamma},\\ \widehat{C}_{5,5,\gamma}(x) = x^{5\gamma}. \end{array}

    It can be seen that every member of the set \{\widehat{C}_{5,i,\gamma}(x)\} has degree 5\gamma . Figure 1 shows the graphs of these polynomials for \gamma = \frac{1}{2} on the interval [0,1] .

    Figure 1.  Plots of \widehat{C}_{5,i,\gamma}(x) for i = 0,...,5 and \gamma = \frac{1}{2} .
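The relation (3.2) can be confirmed in exact rational arithmetic from the expansion (3.1); the following sketch (our own check, with \gamma = 1/2 ) verifies it for N = 5 :

```python
from fractions import Fraction
from math import comb

def fchp_coeffs(N, n):
    """Coefficients {j: c_j} with C-hat_{N,n,gamma}(x) = sum_j c_j x^{j*gamma}, Eq. (3.1)."""
    return {j: Fraction((-1) ** (j - n) * comb(N - n, j - n) * comb(N + j + 1, N - n))
            for j in range(n, N + 1)}

def weighted_inner(N, i, j, gamma):
    """Exact value of the integral in (3.2) with weight w(x) = x^(gamma-1):
    each monomial pair contributes int_0^1 x^{(j1+j2+1)*gamma - 1} dx = 1/((j1+j2+1)*gamma)."""
    ci, cj = fchp_coeffs(N, i), fchp_coeffs(N, j)
    return sum(a * b / ((p + q + 1) * gamma) for p, a in ci.items() for q, b in cj.items())

N, gamma = 5, Fraction(1, 2)
for i in range(N + 1):
    for j in range(N + 1):
        expected = 1 / (gamma * (2 * i + 1)) if i == j else Fraction(0)
        assert weighted_inner(N, i, j, gamma) == expected
print("FCHP orthogonality (3.2) verified exactly for N = 5, gamma = 1/2")
```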

    Lemma 3.1. The fractional-order Chelyshkov polynomial \widehat{C}_{N,0,\gamma}(x) has precisely N zeros of the form x_{i}^{1/\gamma} for i = 1,...,N , where the x_{i} are the zeros of the standard Chelyshkov polynomial C_{N,0}(x) defined in (1.2).

    Proof. The Chelyshkov polynomial C_{N,0}(x) can be written, up to a constant factor, as

    C_{N,0}(x) = (x-x_{1})(x-x_{2})\cdots(x-x_{N}).

    Changing the variable x = t^{\gamma} yields

    \widehat{C}_{N,0,\gamma}(t) = (t^{\gamma}-x_{1})(t^{\gamma}-x_{2})\cdots(t^{\gamma}-x_{N}),

    so the zeros of \widehat{C}_{N,0,\gamma}(t) are

    t_{i} = (x_{i})^{1/\gamma},\ \ i = 1,...,N.
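Lemma 3.1 can also be checked numerically; in the sketch below (our own illustration with N = 5 and \gamma = 1/2 ), the roots of C_{5,0} are located by bisection and their 1/\gamma -th powers are confirmed to annihilate \widehat{C}_{5,0,\gamma} :

```python
from math import comb

N, gamma = 5, 0.5

# Standard Chelyshkov polynomial C_{5,0}(x) from Eq. (1.2): coefficients of x^0..x^5.
coeffs = [(-1) ** j * comb(5, j) * comb(6 + j, 5) for j in range(6)]

def C50(x):
    return sum(c * x ** j for j, c in enumerate(coeffs))

def bisect(f, a, b, iters=200):
    """Bisection on an interval with a guaranteed sign change."""
    for _ in range(iters):
        m = 0.5 * (a + b)
        if (f(a) > 0) == (f(m) > 0):
            a = m
        else:
            b = m
    return 0.5 * (a + b)

# Locate the N simple roots of C_{5,0} in (0, 1) from sign changes on a fine grid.
grid = [k / 1000 for k in range(1001)]
roots = [bisect(C50, grid[k], grid[k + 1])
         for k in range(1000) if C50(grid[k]) * C50(grid[k + 1]) < 0]
assert len(roots) == N

# Lemma 3.1: t_i = x_i^(1/gamma) are zeros of C-hat_{N,0,gamma}(t) = C_{N,0}(t^gamma).
def C50_hat(t):
    return C50(t ** gamma)

for x_i in roots:
    assert abs(C50_hat(x_i ** (1.0 / gamma))) < 1e-8
print("Lemma 3.1 verified numerically for N = 5, gamma = 1/2")
```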

    Let M_{N} = span\lbrace \widehat{C}_{N,0,\gamma}(x),\widehat{C}_{N,1,\gamma}(x),...,\widehat{C}_{N,N,\gamma}(x)\rbrace be a subspace of the Hilbert space L^{2}[0,1] . Since M_{N} is a finite-dimensional space, for every u\in L^{2}[0,1] there exists a unique best approximation u_{N}\in M_{N} such that \Vert u-u_{N}\Vert_{2}\leq \Vert u-v\Vert_{2} for all v\in M_{N} ,

    and there exist unique coefficients a_{0}, a_{1}, ..., a_{N} , such that

    \begin{equation} u_{N}(x) = \sum\limits_{n = 0}^{N}a_{n}\widehat{C}_{N,n,\gamma}(x) = \widehat{\mathbf{\Phi}}^{T}(x)\mathbf{A} = \mathbf{A}^{T}\widehat{\mathbf{\Phi}}(x), \end{equation} (3.3)

    where

    \begin{equation} \mathbf{A} = [a_{0},a_{1},...,a_{N}]^{T},\ \ \ \ \widehat{\mathbf{\Phi}}(x) = [\widehat{C}_{N,0,\gamma}(x),\widehat{C}_{N,1,\gamma}(x),...,\widehat{C}_{N,N,\gamma}(x)]^{T} \end{equation} (3.4)

    and

    \begin{equation} a_{n} = (2n +1)\gamma\int_{0}^{1}u(x)\widehat{C}_{N,n,\gamma}(x)w(x)dx. \end{equation} (3.5)
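As an illustration of the expansion (3.3)-(3.5) (our own example, not from the paper): for \gamma = 1/2 the non-smooth function u(x) = \sqrt{x} = x^{\gamma} lies in M_{N} , so its FCHP expansion must reproduce it exactly. The sketch below verifies this in rational arithmetic:

```python
from fractions import Fraction
from math import comb

N, gamma = 5, Fraction(1, 2)

def fchp_coeffs(n):
    """c_j with C-hat_{N,n,gamma}(x) = sum_j c_j x^{j*gamma} (Eq. (3.1))."""
    return {j: Fraction((-1) ** (j - n) * comb(N - n, j - n) * comb(N + j + 1, N - n))
            for j in range(n, N + 1)}

# Coefficients (3.5) for u(x) = x^gamma:
# a_n = (2n+1)*gamma * int_0^1 x^gamma * C-hat_n(x) * x^(gamma-1) dx,
# where each monomial integrates exactly: int_0^1 x^{(j+2)*gamma - 1} dx = 1/((j+2)*gamma).
a = []
for n in range(N + 1):
    c = fchp_coeffs(n)
    a.append((2 * n + 1) * gamma * sum(cj / ((j + 2) * gamma) for j, cj in c.items()))

# Collect u_N = sum_n a_n C-hat_n over the basis {x^{j*gamma}}; since u = x^gamma
# lies in M_N, the expansion must give coefficient 1 on x^gamma and 0 elsewhere.
total = {j: Fraction(0) for j in range(N + 1)}
for n in range(N + 1):
    for j, cj in fchp_coeffs(n).items():
        total[j] += a[n] * cj
assert total == {0: 0, 1: 1, 2: 0, 3: 0, 4: 0, 5: 0}
print("u(x) = sqrt(x) is reproduced exactly by its FCHP expansion (gamma = 1/2)")
```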

    Lemma 3.2. Suppose that D_{*0}^{k\gamma}u\in C(0, 1] for k = 0, 1, ..., N , and u_{N} is the best approximation of u defined by (3.3). Then, we have

    \lim\limits_{N\rightarrow \infty} \Vert u-u_{N} \Vert_{2} = 0,

    provided that \vert D_{*0}^{(N+1)\gamma}u(\xi) \vert \leq \mathcal{M}_{\gamma} .

    Proof. From Theorem 2.1, we have

    \begin{equation} \vert u(x)-\sum\limits_{i = 0}^{N}\frac{x^{i\gamma}}{\Gamma(i\gamma +1)}D_{*0}^{i\gamma}u(0^{+})\vert \leq \mathcal{M}_{\gamma}\frac{x^{(N+1)\gamma}}{\Gamma((N+1)\gamma +1)}. \end{equation} (3.6)

    Due to the fact that u_{N}\in M_{N} is the best approximation of u , we obtain

    \begin{eqnarray} \Vert u-u_{N} \Vert^{2}_{2} &\leq & \Vert u- \sum\limits_{i = 0}^{N}\frac{x^{i\gamma}}{\Gamma(i\gamma +1)}D_{*0}^{i\gamma}u(0^{+})\Vert^{2}_{2} \\ &\leq &\frac{\mathcal{M}^{2}_{\gamma}}{\bigg(\Gamma((N+1)\gamma +1)\bigg)^2}\int_{0}^{1}x^{2(N+1)\gamma}w(x)dx\\ & = & \frac{\mathcal{M}^{2}_{\gamma}}{\bigg(\Gamma((N+1)\gamma +1)\bigg)^2(2N+3)\gamma}. \end{eqnarray} (3.7)

    This yields

    \lim\limits_{N\rightarrow \infty} \Vert u-u_{N} \Vert_{2} = 0.

    Corollary 3.1. From Lemma 3.2, for the approximate solution u_{N}(x) in (3.3), we have the following error bound:

    \begin{equation} \Vert u-u_{N} \Vert_{2} = O\left( \frac{1}{\bigg(\Gamma((N+1)\gamma +1)\bigg)\sqrt{(2N+3)\gamma}}\right). \end{equation} (3.8)

    In this section, we obtain the operational matrix of fractional integration for \widehat{\mathbf{\Phi}}(x) and the operational matrix of the product of the vectors \widehat{\mathbf{\Phi}}(x) and \widehat{\mathbf{\Phi}}^{T}(x) . These operational matrices play a major role in reducing Eq (1.1) to a system of algebraic equations.

    Theorem 4.1. Let \widehat{\mathbf{\Phi}}(x) be the FCHPs vector defined in (3.4) and suppose \gamma\in(0, 1] . Then,

    J_{0}^{\alpha}\widehat{\mathbf{\Phi}}(x) \simeq P \widehat{\mathbf{\Phi}}(x),

    where P is the (N+1)\times(N+1) fractional integral operational matrix, given by

    P = \left[ \begin{array}{cccc} \Theta(0,0) & \Theta(0,1) & \ldots & \Theta(0,N) \\ \Theta(1,0) & \Theta(1,1) & \cdots & \Theta(1,N) \\ \vdots & \vdots & \ddots & \vdots \\ \Theta(N,0) & \Theta(N,1) & \ldots & \Theta(N,N) \\ \end{array} \right],

    where

    \begin{equation} \Theta(n,k) = \sum\limits_{j = n}^{N}(-1)^{j-n}\binom{N-n}{j-n}\binom{N+j+1}{N-n} \frac{\Gamma (j\gamma +1)}{\Gamma (j\gamma+\alpha +1)}\xi_{j,k}, \end{equation} (4.1)

    and

    \xi_{j,k} = \gamma(2k+1) \sum\limits_{l = k}^{N}\frac{(-1)^{l-k}}{(j+l+1)\gamma+\alpha}\binom{N-k}{l-k}\binom{N+l+1}{N-k}.

    Proof. According to the definition of fractional integral (2.1), we have

    \begin{align} J_{0}^{\alpha}\widehat{C}_{N,n,\gamma}(x)& = \sum\limits_{j = n}^{N}(-1)^{j-n}\binom{N-n}{j-n}\binom{N+j+1}{N-n}J_{0}^{\alpha}x^{j\gamma} \\ & = \sum\limits_{j = n}^{N}(-1)^{j-n}\binom{N-n}{j-n}\binom{N+j+1}{N-n} \frac{\Gamma (j\gamma +1)}{\Gamma (j\gamma+\alpha +1)}x^{j\gamma+\alpha}. \end{align} (4.2)

    Now, by approximating x^{j\gamma+\alpha} in terms of \widehat{\mathbf{\Phi}}(x) , we have

    \begin{equation} x^{j\gamma+\alpha}\simeq \sum\limits_{k = 0}^{N}\xi_{j,k}\widehat{C}_{N,k,\gamma}(x), \end{equation} (4.3)

    where

    \begin{align} \xi_{j,k}& = \gamma(2k+1) \int_{0}^{1}x^{j\gamma+\alpha}\widehat{C}_{N,k,\gamma}(x)w(x)dx \\ & = \gamma(2k+1) \sum\limits_{l = k}^{N}(-1)^{l-k}\binom{N-k}{l-k}\binom{N+l+1}{N-k}\int_{0}^{1}x^{(j+l+1)\gamma+\alpha -1}dx \\ & = \gamma(2k+1) \sum\limits_{l = k}^{N}\frac{(-1)^{l-k}}{(j+l+1)\gamma+\alpha}\binom{N-k}{l-k}\binom{N+l+1}{N-k}. \end{align} (4.4)

    Therefore, we derive from (4.2) and (4.3) that

    \begin{align} J_{0}^{\alpha}\widehat{C}_{N,n,\gamma}(x)& = \sum\limits_{k = 0}^{N}\left(\sum\limits_{j = n}^{N}(-1)^{j-n}\binom{N-n}{j-n}\binom{N+j+1}{N-n} \frac{\Gamma (j\gamma +1)}{\Gamma (j\gamma+\alpha +1)}\xi_{j,k}\right)\widehat{C}_{N,k,\gamma}(x)\\ & = \sum\limits_{k = 0}^{N}\Theta(n,k)\widehat{C}_{N,k,\gamma}(x). \end{align} (4.5)

    This leads to the desired result.
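The key step of the proof, the projection coefficients \xi_{j,k} of (4.4), can be validated exactly: when \alpha = \gamma , the monomial x^{j\gamma+\alpha} = x^{(j+1)\gamma} lies in M_{N} for j+1\leq N , so the approximation (4.3) must hold with equality. The sketch below (our own consistency check, with \alpha = \gamma = 1/2 and N = 5 ) confirms this in rational arithmetic:

```python
from fractions import Fraction
from math import comb

N = 5
gamma = alpha = Fraction(1, 2)

def fchp_coeffs(n):
    """c_j with C-hat_{N,n,gamma}(x) = sum_j c_j x^{j*gamma} (Eq. (3.1))."""
    return {j: Fraction((-1) ** (j - n) * comb(N - n, j - n) * comb(N + j + 1, N - n))
            for j in range(n, N + 1)}

def xi(j, k):
    """The projection coefficient xi_{j,k} from Eq. (4.4)."""
    return gamma * (2 * k + 1) * sum(
        Fraction((-1) ** (l - k) * comb(N - k, l - k) * comb(N + l + 1, N - k))
        / ((j + l + 1) * gamma + alpha)
        for l in range(k, N + 1))

# Expand sum_k xi_{j,k} C-hat_k over the monomial basis {x^{l*gamma}} and check
# that it reproduces x^{(j+1)*gamma} exactly for j = 0,...,N-1.
for j in range(N):
    total = {l: Fraction(0) for l in range(N + 1)}
    for k in range(N + 1):
        xjk = xi(j, k)
        for l, cl in fchp_coeffs(k).items():
            total[l] += xjk * cl
    assert all(total[l] == (1 if l == j + 1 else 0) for l in range(N + 1))
print("xi_{j,k} reproduces x^{(j+1)*gamma} exactly (alpha = gamma = 1/2, N = 5)")
```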

    Theorem 4.2. If V = [v_{0}, v_{1}, ..., v_{N}]^{T} , then

    \begin{equation} \widehat{\mathbf{\Phi}}(x)\widehat{\mathbf{\Phi}}^{T}(x)V \simeq \widehat{V}\widehat{\mathbf{\Phi}}(x), \end{equation} (4.6)

    where

    \begin{equation} \widehat{V} = [\widehat{v}_{i,j}]_{i,j = 0}^{N},\ \ \ \widehat{v}_{i,j}: = \sum\limits_{l = 0}^{N}v_{l}\mu_{i,l,j}, \end{equation} (4.7)

    and the quantities v_{l} and \mu_{i,l,j} are introduced in the proof.

    Proof. Let

    \begin{equation} \widehat{\mathbf{\Phi}}(x)\widehat{\mathbf{\Phi}}^{T}(x)V = \left[ \begin{array}{c} \sum\limits_{j = 0}^{N}v_{j}\widehat{C}_{N,0,\gamma}(x)\widehat{C}_{N,j,\gamma}(x) \\ \sum\limits_{j = 0}^{N}v_{j}\widehat{C}_{N,1,\gamma}(x)\widehat{C}_{N,j,\gamma}(x) \\ \vdots\\ \sum\limits_{j = 0}^{N}v_{j}\widehat{C}_{N,N,\gamma}(x)\widehat{C}_{N,j,\gamma}(x)\\ \end{array} \right]. \end{equation} (4.8)

    By approximating \widehat{C}_{N, i, \gamma}(x)\widehat{C}_{N, j, \gamma}(x) in terms of \widehat{\mathbf{\Phi}}(x) , we have

    \begin{equation} \widehat{C}_{N,i,\gamma}(x)\widehat{C}_{N,j,\gamma}(x)\simeq \sum\limits_{k = 0}^{N}\mu_{i,j,k}\widehat{C}_{N,k,\gamma}(x), \end{equation} (4.9)

    where

    \begin{align} \mu_{i,j,k}& = \gamma(2k+1) \int_{0}^{1}\widehat{C}_{N,i,\gamma}(x)\widehat{C}_{N,j,\gamma}(x)\widehat{C}_{N,k,\gamma}(x)w(x)dx. \end{align} (4.10)

    On the other hand, we can write \widehat{\mathbf{\Phi}}(x) = \mathbf{D}\widehat{X}(x) , where \widehat{X}(x) = [1, x^{\gamma}, ..., x^{N\gamma}]^{T} and \mathbf{D} is an upper triangular coefficient matrix (see [51] for details). Let \mathbf{D}_{i} denote the i -th row of \mathbf{D} . Therefore, we achieve

    \begin{align} \mu_{i,j,k}& = \gamma(2k+1) \int_{0}^{1}\widehat{C}_{N,i,\gamma}(x)\widehat{C}_{N,j,\gamma}(x)\widehat{C}_{N,k,\gamma}(x)w(x)dx \\ & = \gamma(2k+1) \int_{0}^{1}\mathbf{D}_{i}\widehat{X}(x)\widehat{X}^{T}(x)\mathbf{D}^{T}_{j}\widehat{C}_{N,k,\gamma}(x)w(x)dx \\ & = \mathbf{D}_{i}\bigg( \gamma(2k+1)\int_{0}^{1}\widehat{X}(x)\widehat{X}^{T}(x)\widehat{C}_{N,k,\gamma}(x)w(x)dx\bigg)\mathbf{D}^{T}_{j} \\ & = \mathbf{D}_{i}\mathbf{K}\mathbf{D}^{T}_{j}, \end{align} (4.11)

    where \mathbf{K} is the (N+1)\times(N+1) matrix given by

    \begin{align} [\mathbf{K}]_{r,s}& = \gamma(2k+1) \int_{0}^{1}x^{(r+s)\gamma}\widehat{C}_{N,k,\gamma}(x)w(x)dx \\ & = \gamma(2k+1) \sum\limits_{l = k}^{N}(-1)^{l-k}\binom{N-k}{l-k}\binom{N+l+1}{N-k}\int_{0}^{1}x^{(r+s+l+1)\gamma -1}dx \\ & = (2k+1) \sum\limits_{l = k}^{N}\frac{(-1)^{l-k}}{r+s+l+1}\binom{N-k}{l-k}\binom{N+l+1}{N-k}, \end{align} (4.12)

    for r, s, k = 0, ..., N . From (4.8) and (4.9), we obtain

    \begin{align} \sum\limits_{j = 0}^{N}v_{j}\widehat{C}_{N,i,\gamma}(x)\widehat{C}_{N,j,\gamma}(x)&\simeq \sum\limits_{j = 0}^{N}v_{j}\left(\sum\limits_{k = 0}^{N}\mu_{i,j,k} \widehat{C}_{N,k,\gamma}(x)\right)\\ & = \sum\limits_{k = 0}^{N}\left( \sum\limits_{j = 0}^{N}v_{j}\mu_{i,j,k}\right)\widehat{C}_{N,k,\gamma}(x)\\ & = \sum\limits_{k = 0}^{N}\widehat{v}_{i,k}\widehat{C}_{N,k,\gamma}(x), \end{align} (4.13)

    for i = 0, 1, ..., N . This leads to the desired result.

    Now, we find the upper bound for the error vector of the operational matrix P defined in Theorem 4.1. To this end, first we state the following theorems:

    Theorem 4.3. [31] Suppose that H is a Hilbert space and U = span\lbrace u_{1}, u_{2},..., u_{N}\rbrace is a closed subspace of H . Let u be an arbitrary element in H and u^{*}\in U be the unique best approximation to u . Then,

    \Vert u-u^{*} \Vert_{2}^{2} = \frac{G(u,u_{1},u_{2},...,u_{N})}{G(u_{1},u_{2},...,u_{N})},

    where

    G(u,u_{1},u_{2},...,u_{N}) = \left| \begin{array}{cccc} < u,u > & < u,u_{1} > & \ldots & < u,u_{N} > \\ < u_{1},u > & < u_{1},u_{1} > & \ldots & < u_{1},u_{N} > \\ \vdots & \vdots &\vdots & \vdots \\ < u_{N},u > & < u_{N},u_{1} > & \ldots & < u_{N},u_{N} > \\ \end{array} \right|.

    Theorem 4.4. Let

    \begin{equation*} \label{E0} E_{I,\alpha}(x) = J_{0}^{\alpha}\widehat{\mathbf{\Phi}}(x)-P\widehat{\mathbf{\Phi}}(x), \end{equation*}

    be the error vector of the operational matrix P defined in Theorem 4.1. Then,

    \Vert e_{j,\alpha}\Vert_{2} \\ \leq \sum\limits_{i = j}^{N}\binom{N-j}{i-j}\binom{N+i+1}{N-j} \frac{\Gamma (i\gamma +1)}{\Gamma (i\gamma+\alpha +1)}\left(\frac{G\bigg(x^{i\gamma+\alpha},\widehat{C}_{N,0,\gamma}(x),\widehat{C}_{N,1,\gamma}(x),...,\widehat{C}_{N,N,\gamma}(x)\bigg)}{G\bigg(\widehat{C}_{N,0,\gamma}(x),\widehat{C}_{N,1,\gamma}(x),...,\widehat{C}_{N,N,\gamma}(x)\bigg)}\right)^{1/2}

    and

    \begin{equation} \Vert E_{I,\alpha}\Vert_{2} \rightarrow 0, \end{equation} (4.14)

    where e_{j, \alpha}(x) is the j-th component of E_{I, \alpha}(x) and \Vert E_{I, \alpha}\Vert_{2}: = \bigg(\sum_{j} \Vert e_{j, \alpha}\Vert^{2}_{2}\bigg)^{{\frac{1}{2}}} .

    Proof. We have

    \begin{equation} e_{j,\alpha}(x) = \sum\limits_{i = j}^{N}(-1)^{i-j}\binom{N-j}{i-j}\binom{N+i+1}{N-j} \frac{\Gamma (i\gamma +1)}{\Gamma (i\gamma+\alpha +1)}\left( x^{i\gamma+\alpha}-\sum\limits_{k = 0}^{N}\xi_{k,i}\widehat{C}_{N,k,\gamma}(x)\right), \end{equation} (4.15)

    for j = 0, 1, ..., N . From Theorem 4.3, we can write

    \begin{equation} \Vert x^{i\gamma+\alpha}-\sum\limits_{k = 0}^{N}\xi_{k,i}\widehat{C}_{N,k,\gamma}(x) \Vert_{2} = \left(\frac{G\bigg(x^{i\gamma+\alpha},\widehat{C}_{N,0,\gamma}(x),\widehat{C}_{N,1,\gamma}(x),...,\widehat{C}_{N,N,\gamma}(x)\bigg)}{G\bigg(\widehat{C}_{N,0,\gamma}(x),\widehat{C}_{N,1,\gamma}(x),...,\widehat{C}_{N,N,\gamma}(x)\bigg)}\right)^{1/2}. \end{equation} (4.16)

    From (4.15) and (4.16), we obtain

    \Vert e_{j,\alpha}\Vert_{2}\\ \leq \sum\limits_{i = j}^{N}\binom{N-j}{i-j}\binom{N+i+1}{N-j} \frac{\Gamma (i\gamma +1)}{\Gamma (i\gamma+\alpha +1)}\left(\frac{G\bigg(x^{i\gamma+\alpha},\widehat{C}_{N,0,\gamma}(x),\widehat{C}_{N,1,\gamma}(x),...,\widehat{C}_{N,N,\gamma}(x)\bigg)}{G\bigg(\widehat{C}_{N,0,\gamma}(x),\widehat{C}_{N,1,\gamma}(x),...,\widehat{C}_{N,N,\gamma}(x)\bigg)}\right)^{1/2}. (4.17)

    By considering the above results and Lemma 3.2, we can conclude that

    \Vert E_{I,\alpha}\Vert_{2}\rightarrow 0, \ \ \ N\rightarrow \infty.

    Theorem 4.5. Let

    E_{P,\alpha}(x) = \widehat{\mathbf{\Phi}}(x)\widehat{\mathbf{\Phi}}^{T}(x)V-\widehat{V}\widehat{\mathbf{\Phi}}(x),

    be the error vector of the operational matrix \widehat{V} defined in Theorem 4.2. Then, an upper bound for \Vert E_{P, \alpha}\Vert_{2} can be obtained by a similar argument, since from (4.9) and Theorem 4.3, we have

    \Vert \widehat{C}_{N,i,\gamma}(x)\widehat{C}_{N,j,\gamma}(x)-\sum\limits_{k = 0}^{N}\mu_{i,j,k} \widehat{C}_{N,k,\gamma}(x)\Vert_{2} = \left(\frac{G\bigg(\widehat{C}_{N,i,\gamma}(x)\widehat{C}_{N,j,\gamma}(x),\widehat{C}_{N,0,\gamma}(x),\widehat{C}_{N,1,\gamma}(x),...,\widehat{C}_{N,N,\gamma}(x)\bigg)}{G\bigg(\widehat{C}_{N,0,\gamma}(x),\widehat{C}_{N,1,\gamma}(x),...,\widehat{C}_{N,N,\gamma}(x)\bigg)}\right)^{1/2}.

    For example, for N = 5 , \alpha = \gamma = \frac{1}{2} , the following upper bound for components of E_{I, \frac{1}{2}}(x) can be achieved:

    \begin{align*} &\Vert e_{0,\frac{1}{2}}\Vert_{2}\leq 2.3639\times 10^{-1},\ \ \Vert e_{1,\frac{1}{2}}\Vert_{2}\leq 1.6885\times 10^{-1},\ \ \Vert e_{2,\frac{1}{2}}\Vert_{2}\leq 8.4424\times 10^{-2},\\ &\Vert e_{3,\frac{1}{2}}\Vert_{2}\leq 2.8141\times 10^{-2},\ \ \Vert e_{4,\frac{1}{2}}\Vert_{2}\leq 5.6283\times 10^{-3},\ \ \Vert e_{5,\frac{1}{2}}\Vert_{2}\leq 5.1166\times 10^{-4}.\ \end{align*}

    Hence,

    E_{I,1/2}(x)\leq \left[ \begin{array}{c} 2.3639\times 10^{-1} \\ 1.6885\times 10^{-1} \\ 8.4424\times 10^{-2} \\ 2.8141\times 10^{-2} \\ 5.6283\times 10^{-3} \\ 5.1166\times 10^{-4} \\ \end{array} \right].

    By using the definition of the Riemann-Liouville fractional integral (2.1), we can rewrite Eq (1.1) in the form

    \begin{eqnarray} y(x) = a(x)+f(x,y(x))J_{0}^{\alpha}g(x,y(x)). \end{eqnarray} (5.1)

    Based on the implicit collocation method [15], let

    \begin{equation} w_{1}(x) = f(x,y(x)),\ \ \ w_{2}(x) = g(x,y(x)). \end{equation} (5.2)

    From Eqs (5.1) and (5.2), we have

    \begin{equation} \left\{ \begin{array}{ll} w_{1}(x) = f\bigg(x,a(x)+w_{1}(x)J_{0}^{\alpha}w_{2}(x) \bigg), \\ w_{2}(x) = g\bigg(x,a(x)+w_{1}(x)J_{0}^{\alpha}w_{2}(x)\bigg), \end{array} \right. \end{equation} (5.3)

    The integral operator in (5.3) is linear, and therefore the application of the operational matrices becomes straightforward. The functions w_{1}(x) and w_{2}(x) can be approximated as follows:

    \begin{equation} \left\{ \begin{array}{ll} w_{1}(x)\simeq w_{N,1}(x) = \sum\limits_{i = 0}^{N}w_{i,1}\widehat{C}_{N,i,\gamma}(x) = \widehat{\mathbf{\Phi}}^{T}(x)\mathbf{W}_{1}, \\ w_{2}(x)\simeq w_{N,2}(x) = \sum\limits_{i = 0}^{N}w_{i,2}\widehat{C}_{N,i,\gamma}(x) = \widehat{\mathbf{\Phi}}^{T}(x)\mathbf{W}_{2}, \end{array} \right. \end{equation} (5.4)

    where \mathbf{W}_{i} = [w_{0,i}, w_{1,i}, ..., w_{N,i}]^{T} are the unknown coefficient vectors for i = 1, 2 . By applying Theorems 4.1 and 4.2, we get

    \begin{align} w_{1}(x)J_{0}^{\alpha}w_{2}(x) &\simeq \mathbf{W}_{1}^{T} \widehat{\mathbf{\Phi}}(x)J_{0}^{\alpha}\widehat{\mathbf{\Phi}}^{T}(x)\mathbf{W}_{2}\simeq \mathbf{W}_{1}^{T} \widehat{\mathbf{\Phi}}(x)\widehat{\mathbf{\Phi}}^{T}(x)P ^{T}\mathbf{W}_{2}\simeq \widehat{\mathbf{\Phi}}^{T}(x)\widehat{\mathbf{W}}_{1}^{T}P ^{T}\mathbf{W}_{2}. \end{align} (5.5)

    From (5.4) and (5.5), the system (5.3) can be written as follows:

    \begin{equation} \left\{ \begin{array}{ll} \widehat{\mathbf{\Phi}}^{T}(x)\mathbf{W}_{1} \simeq f\left(x,a(x)+\widehat{\mathbf{\Phi}}^{T}(x)\widehat{\mathbf{W}}_{1}^{T}P ^{T}\mathbf{W}_{2}\right), \\ \widehat{\mathbf{\Phi}}^{T}(x)\mathbf{W}_{2} \simeq g\left(x,a(x)+\widehat{\mathbf{\Phi}}^{T}(x)\widehat{\mathbf{W}}_{1}^{T}P ^{T}\mathbf{W}_{2}\right). \end{array} \right. \end{equation} (5.6)

    By collocating Eq (5.6) at the points \widehat{x}_{i} = x_{i}^{\frac{1}{\gamma}} , the zeros of \widehat{C}_{N+1, 0, \gamma}(x) , we obtain the following system of nonlinear algebraic equations

    \begin{equation} \left\{ \begin{array}{ll} \widehat{\mathbf{\Phi}}^{T}(\widehat{x_{i}})\mathbf{W}_{1} = f\left(\widehat{x_{i}},a(\widehat{x_{i}})+\widehat{\mathbf{\Phi}}^{T}(\widehat{x_{i}})\widehat{\mathbf{W}}_{1}^{T}P ^{T}\mathbf{W}_{2}\right),\\ \\ \widehat{\mathbf{\Phi}}^{T}(\widehat{x_{i}})\mathbf{W}_{2} = g\left(\widehat{x_{i}},a(\widehat{x_{i}})+\widehat{\mathbf{\Phi}}^{T}(\widehat{x_{i}})\widehat{\mathbf{W}}_{1}^{T}P ^{T}\mathbf{W}_{2}\right). \end{array} \right. \end{equation} (5.7)

    This nonlinear system can be solved for the unknown vectors \mathbf{W}_{1} and \mathbf{W}_{2} . We employed the "fsolve" command in Maple for solving this system. Finally, the approximate solution of the Eq (1.1) is obtained as follows:

    \begin{equation} y_{N}(x) = a(x)+w_{N,1}(x)J_{0}^{\alpha}w_{N,2}(x). \end{equation} (5.8)

    We present the algorithm of the method which is used to solve the numerical examples:

    Algorithm:

    Input: The numbers \alpha , \gamma ; the functions a(.) , f(., .) and g(., .) .

    Step 1. Choose N and construct the vector basis \widehat{\mathbf{\Phi}} using relation (3.1).

    Step 2. Compute the operational matrices P and \widehat{V} using Theorems 4.1 and 4.2.

    Step 3. Compute the relation (5.5).

    Step 4. Generate \widehat{x}_{i} for i = 0, ..., N , the roots of \widehat{C}_{N+1, 0, \gamma}(x) .

    Step 5. Construct the nonlinear system of algebraic Eq (5.7) by using the nodes \widehat{x}_{i} .

    Step 6. Solve the system obtained in Step 5 to determine the vectors \mathbf{W}_{1} and \mathbf{W}_{2} .

    Output: The approximate solution (5.8).
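For comparison with the spectral scheme above, Eq (5.1) can also be attacked by a plain Picard (fixed-point) iteration on a grid. The following sketch is our own baseline illustration, not the paper's method; the data a , f , g are hypothetical choices satisfying conditions (1) and (2), picked so that the fixed-point map is a contraction:

```python
import math

# Picard iteration for y = a + f(.,y) * J^alpha g(.,y) on a uniform grid.
# Hypothetical problem data (not from the paper's examples):
alpha = 0.5
a_fun = lambda x: x
f_fun = lambda x, y: math.sin(y) / 4.0   # bounded by 1/4, Lipschitz with L1 = 1/4
g_fun = lambda x, y: math.cos(y) / 4.0   # bounded by 1/4, Lipschitz with L2 = 1/4

M = 200                                   # number of grid cells on [0, 1]
xs = [i / M for i in range(M + 1)]

def frac_integral(vals, x_index):
    """Product-rectangle rule for J^alpha at x = xs[x_index]: the kernel
    (x - t)^(alpha - 1) is integrated exactly on each cell, while the
    integrand g(t, y(t)) is frozen at the cell's left endpoint."""
    x = xs[x_index]
    total = 0.0
    for i in range(x_index):
        t0, t1 = xs[i], xs[i + 1]
        w = ((x - t0) ** alpha - (x - t1) ** alpha) / alpha  # exact cell integral of kernel
        total += w * vals[i]
    return total / math.gamma(alpha)

y = [a_fun(x) for x in xs]                # initial guess y_0 = a
for it in range(50):
    g_vals = [g_fun(x, yx) for x, yx in zip(xs, y)]
    y_new = [a_fun(x) + f_fun(x, yx) * frac_integral(g_vals, i)
             for i, (x, yx) in enumerate(zip(xs, y))]
    diff = max(abs(u - v) for u, v in zip(y_new, y))
    y = y_new
    if diff < 1e-12:
        break

print("converged after", it + 1, "iterations; y(1) =", y[-1])
assert diff < 1e-12
```

With the chosen bounds, the contraction factor is roughly (L_{1}M_{2}+M_{1}L_{2})/\Gamma(\alpha+1)\approx 0.14 , so the iteration stagnates at machine precision within about fifteen sweeps; such a low-order baseline is useful for sanity-checking a spectral implementation on non-trivial data.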

    In this section, we investigate the convergence of the proposed method in the space L_{2}[0, 1] .

    Theorem 6.1. Assume that w_{i}(x) and w_{i, N}(x) are the exact and approximate solutions of Problems (5.3) and (5.6), respectively, and that Conditions (1) and (2) are satisfied. Then,

    \lim\limits_{N\rightarrow \infty} \Vert e_{i,N} \Vert_{2} = 0,\ \ \ i = 1,2,

    provided that

    \begin{equation} 0 < (L_{1}+L_{2})M_{1} < 1,\ \ \ \ 0 < (L_{1}+L_{2})M_{2} < 1, \end{equation} (6.1)

    in which e_{i, N}: = w_{i}-w_{i, N} are called the error functions.

    Proof. By subtracting (5.6) from (5.3) and using Condition (2), we get

    \begin{equation} \left\{ \begin{array}{ll} \vert e _{1,N}(x) \vert \leq L_{1}\vert w_{1}(x)J_{0}^{\alpha}w_{2}(x) -\widehat{\mathbf{\Phi}}^{T}(x)\widehat{\mathbf{W}}_{1}^{T}P ^{T}\mathbf{W}_{2}\vert, \\ \\ \vert e _{2,N}(x)\vert \leq L_{2}\vert w_{1}(x)J_{0}^{\alpha}w_{2}(x) -\widehat{\mathbf{\Phi}}^{T}(x)\widehat{\mathbf{W}}_{1}^{T}P ^{T}\mathbf{W}_{2}\vert. \end{array} \right. \end{equation} (6.2)

    The Relation (6.2) can be written as

    \begin{equation} \left\{ \begin{array}{ll} \vert e _{1,N}(x) \vert \leq L_{1}\vert w_{1}(x)J_{0}^{\alpha}w_{2}(x) - w_{1,N}(x)J_{0}^{\alpha}w_{2,N}(x)\vert+L_{1}\vert \mathcal{E}_{\alpha}(x)\vert, \\ \\ \vert e _{2,N}(x)\vert \leq L_{2}\vert w_{1}(x)J_{0}^{\alpha}w_{2}(x) -w_{1,N}(x)J_{0}^{\alpha}w_{2,N}(x)\vert+L_{2}\vert \mathcal{E}_{\alpha}(x)\vert, \end{array} \right. \end{equation} (6.3)

    where

    \mathcal{E}_{\alpha}(x) = w_{1,N}(x)J_{0}^{\alpha}w_{2,N}(x)-\widehat{\mathbf{\Phi}}^{T}(x)\widehat{\mathbf{W}}_{1}^{T}P ^{T}\mathbf{W}_{2}.

    From Theorems 4.4 and 4.5, we can conclude that \Vert \mathcal{E}_{\alpha}\Vert_{2}\rightarrow 0 as N\rightarrow \infty . Using Condition (1) and the Cauchy-Schwarz inequality, we get

    \begin{equation} \begin{array}{rl} \vert w_{1}(x)J_{0}^{\alpha}w_{2}(x)-w_{1,N}(x)J_{0}^{\alpha}w_{2,N}(x) \vert & \leq \vert w_{1}(x)\vert\, \vert J_{0}^{\alpha}w_{2}(x)-J_{0}^{\alpha}w_{2,N}(x) \vert +\vert w_{1}(x)-w_{1,N}(x)\vert\, \vert J_{0}^{\alpha}w_{2,N}(x) \vert \\ \\ & \leq M_{1} \vert J_{0}^{\alpha}e_{2,N}(x) \vert +\vert e_{1,N}(x)\vert\, \vert J_{0}^{\alpha}w_{2}(x) \vert+\vert e_{1,N}(x)\vert\, \vert J_{0}^{\alpha}e_{2,N}(x) \vert \\ \\ & \leq M_{1} \Vert e_{2,N}\Vert_{2}+M_{2}\vert e_{1,N}(x) \vert+\vert e_{1,N}(x)\vert\, \Vert e_{2,N}\Vert_{2}, \end{array} \end{equation} (6.4)

    therefore, from (6.3) and (6.4), we can write

    \begin{equation} \left\{ \begin{array}{ll} \Vert e _{1,N} \Vert_{2} \leq L_{1}M_{1} \Vert e_{2,N}\Vert_{2}+L_{1}M_{2}\Vert e_{1,N} \Vert_{2}+L_{1}\Vert e_{1,N}\Vert_{2}\Vert e_{2,N}\Vert_{2}+L_{1}\Vert \mathcal{E}_{\alpha}\Vert_{2},\\ \\ \Vert e _{2,N}\Vert_{2} \leq L_{2}M_{1} \Vert e_{2,N}\Vert_{2}+L_{2}M_{2}\Vert e_{1,N} \Vert_{2}+L_{2}\Vert e_{1,N}\Vert_{2}\Vert e_{2,N}\Vert_{2}+L_{2}\Vert \mathcal{E}_{\alpha}\Vert_{2}. \end{array} \right. \end{equation} (6.5)

    By ignoring the higher-order term \Vert e_{1,N}\Vert_{2}\Vert e_{2,N}\Vert_{2} in (6.5) and adding the two inequalities, we obtain

    \begin{equation*} \Vert e _{1,N} \Vert_{2}+ \Vert e _{2,N}\Vert_{2} \leq \bigg(L_{1}+ L_{2}\bigg)M_{1} \Vert e_{2,N}\Vert_{2} +\bigg(L_{1}+L_{2}\bigg)M_{2}\Vert e_{1,N} \Vert_{2}+\bigg(L_{1}+L_{2}\bigg)\Vert \mathcal{E}_{\alpha}\Vert_{2}, \end{equation*}

    which yields

    \begin{equation*} \bigg(1-(L_{1}+L_{2})M_{2}\bigg)\Vert e _{1,N} \Vert_{2}+ \bigg(1-(L_{1}+L_{2})M_{1}\bigg) \Vert e _{2,N}\Vert_{2} \leq \bigg(L_{1}+L_{2}\bigg)\Vert \mathcal{E}_{\alpha}\Vert_{2}. \end{equation*}

    Now, according to the inequalities in (6.1), both coefficients on the left-hand side are positive; since \Vert \mathcal{E}_{\alpha}\Vert_{2}\rightarrow 0 as N\rightarrow \infty , it follows that \Vert e_{1,N}\Vert_{2}\rightarrow 0 and \Vert e_{2,N}\Vert_{2}\rightarrow 0 , and the proof is complete.

    Theorem 6.2. Suppose that y(x) and y_{N}(x) are the exact and approximate solutions of Eq (1.1), respectively, and that Conditions (1) and (2) and Relation (6.1) hold. Then, we have

    \begin{equation} \lim\limits_{N\rightarrow \infty} \Vert y-y_{N}\Vert_{2} = 0. \end{equation} (6.6)

    Proof. By subtracting (5.8) from (5.1), we get

    y(x)-y_{N}(x) = w_{1}(x)J_{0}^{\alpha}w_{2}(x)-w_{1,N}(x)J_{0}^{\alpha}w_{2,N}(x).

    Therefore, from Theorem 6.1 we can conclude that (6.6) is valid.

    In this section, we provide some numerical examples to illustrate the efficiency and accuracy of the method. All calculations were performed in Maple 2018. The results are compared with those obtained using the spectral collocation method based on the standard Chelyshkov polynomials ( \gamma = 1 ) [43], the Taylor-collocation method [54], and the Chebyshev cardinal functions method [27]. To test the accuracy of the method, the error norm \Vert E_{N}\Vert_{2} is computed as:

    E_{N}(x): = \vert y(x)-y_{N}(x)\vert, \qquad \Vert E_{N}\Vert_{2}: = \sqrt{\frac{1}{N}\sum\nolimits_{i = 0}^{N}E_{N}^{2}(x_{i})},\ \ \ \ \ (x_{i} = ih,\ \ Nh = 1).
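    The discrete norm above is a one-line root-mean-square over the uniform grid x_{i} = ih . The sketch below is our own illustration (the callables y_exact and y_approx are hypothetical stand-ins for y and y_{N} ):

```python
import numpy as np

def error_norm(y_exact, y_approx, N):
    """Discrete L2 error norm ||E_N||_2 from the display above: sample the
    pointwise error on the uniform grid x_i = i/N, i = 0..N, and take a
    root-mean-square with normalization 1/N."""
    x = np.arange(N + 1) / N                # x_i = i*h with N*h = 1
    E = np.abs(y_exact(x) - y_approx(x))    # pointwise error E_N(x_i)
    return np.sqrt(np.sum(E**2) / N)
```

    Note that the sum has N+1 terms but is divided by N , exactly as in the formula, so a constant pointwise error c yields \sqrt{(N+1)/N}\,c rather than c .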

    To investigate the numerical stability of the method, we solve the perturbed Eq (1.1) of the form

    \begin{eqnarray*} y(x) = a^{\epsilon}(x)+\frac{f^{\epsilon}(x,y(x))}{\Gamma (\alpha)}\int_{0}^{x}(x-t)^{\alpha-1}g^{\epsilon}(t,y(t))dt, \ \ \ \ x\in [0,1], \end{eqnarray*}

    with \epsilon = 10^{-3}, 10^{-6}, 10^{-9} .

    Example 7.1. Consider the fractional quadratic integral equation

    \begin{equation} y(x) = \frac{-1}{15}\sin(\sqrt{x})\bigg(\sqrt{\pi x}\, BesselJ(1,\sqrt{x})-15\bigg)+\frac{y(x)}{15\Gamma (1/2)} \int_{0}^{x}(x-t)^{\frac{-1}{2}}y(t)dt, \end{equation} (7.1)

    in which BesselJ(\cdot, \cdot) denotes the Bessel function of the first kind. The exact solution of this problem is y(x) = \sin(\sqrt{x}) . In Table 1, the \Vert E_{N}\Vert_{2} -errors for \gamma = \frac{1}{4}, \frac{1}{2} and \gamma = 1 (standard Chelyshkov polynomials [43]) are given, together with the CPU times. From this table, we see that the fractional-order basis functions yield approximate solutions of higher accuracy than the integer-order basis functions.
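    The forcing term of Example 7.1 rests on the classical identity \int_{0}^{x}(x-t)^{-1/2}\sin(\sqrt{t})\,dt = \pi\sqrt{x}\,J_{1}(\sqrt{x}) . A quick stdlib-only sanity check of this identity (our own, not from the paper; the substitution t = x\sin^{2}\theta removes the endpoint singularity):

```python
import math

def bessel_j1(z, terms=25):
    """Power series J_1(z) = sum_k (-1)^k / (k! (k+1)!) (z/2)^(2k+1)."""
    return sum((-1) ** k / (math.factorial(k) * math.factorial(k + 1))
               * (z / 2.0) ** (2 * k + 1) for k in range(terms))

def weak_singular_integral(x, n=4000):
    """Midpoint rule for Int_0^x (x-t)^(-1/2) sin(sqrt(t)) dt after the
    substitution t = x*sin(theta)^2, which gives the smooth integrand
    2*sqrt(x)*sin(theta)*sin(sqrt(x)*sin(theta)) on [0, pi/2]."""
    h = (math.pi / 2.0) / n
    s = 0.0
    for i in range(n):
        th = (i + 0.5) * h
        s += 2.0 * math.sqrt(x) * math.sin(th) * math.sin(math.sqrt(x) * math.sin(th))
    return s * h
```

    Plugging y(x) = \sin(\sqrt{x}) into the integral term of (7.1) and applying this identity reproduces the Bessel-function expression in the forcing term a(x) .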

    Table 1.  Comparison of the \Vert E_{N}\Vert_{2} -error for different values of \gamma in Example 7.1.

    |        | \gamma=\frac{1}{4}     | CPU time | \gamma=\frac{1}{2}     | CPU time | \gamma=1               | CPU time |
    |--------|------------------------|----------|------------------------|----------|------------------------|----------|
    | N=3    | 2.391962\times10^{-4}  | 1.326s   | 1.866606\times10^{-6}  | 1.124s   | 2.172051\times10^{-4}  | 0.811s   |
    | N=5    | 7.866499\times10^{-7}  | 2.387s   | 7.046119\times10^{-8}  | 2.169s   | 6.374901\times10^{-5}  | 1.872s   |
    | N=7    | 3.493808\times10^{-8}  | 5.180s   | 4.002629\times10^{-10} | 4.602s   | 2.764246\times10^{-5}  | 4.742s   |
    | N=9    | 3.942763\times10^{-9}  | 10.031s  | 1.252167\times10^{-12} | 9.220s   | 1.465058\times10^{-5}  | 8.549s   |
    | N=11   | 6.300361\times10^{-12} | 21.762s  | 2.556736\times10^{-15} | 19.984s  | 8.758143\times10^{-6}  | 17.316s  |


    Table 2 presents a comparison between the numerical results given by our method and those obtained using the Taylor-collocation method [54] and the Chebyshev cardinal functions method [27].

    Table 2.  The \Vert E_{N}\Vert_{2} -error of our method and of [27,54] for Example 7.1.

    |        | Our method ( \gamma=\frac{1}{2} ) | Taylor-collocation method [54] | Chebyshev cardinal functions method [27] |
    |--------|-----------------------------------|--------------------------------|------------------------------------------|
    | N=3    | 1.866606\times10^{-6}             | 8.022061\times10^{-4}          | -                                        |
    | N=5    | 7.046119\times10^{-8}             | 2.793213\times10^{-4}          | 5.457252\times10^{-5}                    |
    | N=7    | 4.002629\times10^{-10}            | 1.433133\times10^{-4}          | 3.071930\times10^{-5}                    |
    | N=9    | 1.252167\times10^{-12}            | 8.802562\times10^{-5}          | 1.672349\times10^{-5}                    |
    | N=11   | 2.556736\times10^{-15}            | 5.997519\times10^{-5}          | 8.332697\times10^{-6}                    |


    Figure 2 shows that the spectral accuracy of our method with \gamma = \frac{1}{2} is achieved: the semi-logarithmic plot of the errors closely follows the test line (dash-dot line), which is the semi-logarithmic graph of e^{-N} .

    Figure 2.  The \Vert E_{N}\Vert_{2} -errors for different values of N in Example 7.1 with \gamma = \frac{1}{2} (solid lines) and \gamma = 1 (dashed lines).

    In Table 3, we solve the perturbed problem with \epsilon = 10^{-3}, 10^{-6}, 10^{-9} to investigate the stability of our method. The numerical results in this example confirm the high-order rate of convergence and the stability of the proposed method, in agreement with the theoretical results.

    Table 3.  The \Vert E_{N}\Vert_{2} -error of the perturbed problem of Example 7.1.

    | \epsilon | N=5                   | CPU time | N=9                    | CPU time |
    |----------|-----------------------|----------|------------------------|----------|
    | 10^{-3}  | 7.077009\times10^{-8} | 2.246s   | 1.257245\times10^{-12} | 9.297s   |
    | 10^{-6}  | 7.046150\times10^{-8} | 2.184s   | 1.252172\times10^{-12} | 9.157s   |
    | 10^{-9}  | 7.046119\times10^{-8} | 2.262s   | 1.252167\times10^{-12} | 9.220s   |


    Example 7.2. Consider the fractional quadratic integral equation

    \begin{equation*} y(x) = x^3-\frac{1}{40}x^{12}+\frac{xy(x)}{5\Gamma(\alpha)} \int_{0}^{x}(x-t)^{\alpha-1}ty^{2}(t)dt. \end{equation*}

    The exact solution for \alpha = 1 is y(x) = x^{3} .

    In this example, we study the applicability of the proposed method when an exact solution is not available for general \alpha . Table 4 shows the numerical results for N = 10 and various values of \alpha and \gamma . From these results, it is seen that the approximate solutions converge to the exact solution of the case \alpha = 1 as \alpha\rightarrow1 . The semi-logarithmic representation of the errors for different values of N with \alpha = \gamma = 1 in Figure 3 confirms the spectral (exponential) rate of convergence of our method.

    Table 4.  The obtained approximate solutions for \alpha = 0.7, 0.8, 0.9, 0.95 with \gamma = \alpha and N = 10 in Example 7.2.

    | \alpha \backslash x       | x=0.2                       | x=0.4                       | x=0.6                       | x=0.8                       | x=1          | CPU time |
    |---------------------------|-----------------------------|-----------------------------|-----------------------------|-----------------------------|--------------|----------|
    | \gamma=\alpha=0.70        | 8.000000548\times 10^{-3}   | 6.400060857\times 10^{-2}   | 2.160655858\times 10^{-1}   | 5.137819785\times 10^{-1}   | 1.024852816  | 15.210s  |
    | \gamma=\alpha=0.80        | 8.000000046\times 10^{-3}   | 6.400035132\times 10^{-2}   | 2.160379087\times 10^{-1}   | 5.130438155\times 10^{-1}   | 1.014435189  | 15.319s  |
    | \gamma=\alpha=0.90        | 8.000000093\times 10^{-3}   | 6.400015092\times 10^{-2}   | 2.160165000\times 10^{-1}   | 5.124608263\times 10^{-1}   | 1.006347436  | 15.585s  |
    | \gamma=\alpha=0.95        | 8.000000050\times 10^{-3}   | 6.400006977\times 10^{-2}   | 2.160077126\times 10^{-1}   | 5.122168920\times 10^{-1}   | 1.002985209  | 15.070s  |
    | Exact sol. ( \alpha=1 )   | 8.000000000\times 10^{-3}   | 6.400000000\times 10^{-2}   | 2.160000000\times 10^{-1}   | 5.120000000\times 10^{-1}   | 1.000000000  | -        |

    Figure 3.  The \Vert E_{N}\Vert_{2} -errors for different values of N in Example 7.2 with \alpha = \gamma = 1 .

    Analytical solutions of integral equations are available only for limited classes of problems, so appropriate numerical methods are required. In this paper, a numerical method based on spectral collocation was presented for solving nonlinear fractional quadratic integral equations, using a new (fractional-order) version of the orthogonal Chelyshkov polynomials as basis functions, and the convergence of the method was investigated. The numerical results confirm the accuracy of the proposed method; in particular, the new non-integer-order basis functions produce highly accurate results. Problems on large intervals were not considered here; as future work, this limitation may be addressed by dividing the domain of the problem into sub-domains and applying the numerical method on each of them [26,34]. The method can also be applied to other kinds of integral equations, such as cordial integral equations of quadratic type, systems of quadratic integral equations, and delay quadratic integral equations.

    The authors declare no conflict of interest.



    [1] A. Ambrosetti, P. H. Rabinowitz, Dual variational methods in critical point theory and applications, J. Funct. Anal., 14 (1973), 349-381. doi: 10.1016/0022-1236(73)90051-7
    [2] T. Bartsch, A. Pankov, Z. Q. Wang, Nonlinear Schrödinger equations with steep potential well, Commun. Contemp. Math., 3 (2001), 549-569. doi: 10.1142/S0219199701000494
    [3] Y. Ding, L. Jeanjean, Homoclinic orbits for a nonperiodic Hamiltonian system, J. Differ. Equations, 237 (2007), 473-490. doi: 10.1016/j.jde.2007.03.005
    [4] I. Ekeland, Convexity Methods in Hamiltonian Mechanics, Berlin: Springer-Verlag, 1990.
    [5] X. He, W. Zou, Infinitely many positive solutions for Kirchhoff-type problems, Nonlinear Anal., 70 (2009), 1407-1414. doi: 10.1016/j.na.2008.02.021
    [6] X. He, W. Zou, Multiplicity of solutions for a class of Kirchhoff type problems, Acta Math. Appl. Sin. Engl. Ser., 26 (2010), 387-394. doi: 10.1007/s10255-010-0005-2
    [7] X. He, W. Zou, Ground states for nonlinear Kirchhoff equations with critical growth, Ann. Mat. Pura Appl., 193 (2014), 473-500. doi: 10.1007/s10231-012-0286-6
    [8] J. Jin, X. Wu, Infinitely many radial solutions for Kirchhoff-type problems in {\mathbb R}^N, J. Math. Anal. Appl., 369 (2010), 564-574. doi: 10.1016/j.jmaa.2010.03.059
    [9] G. Kirchhoff, Mechanik, Leipzig: Druck und Verlag von B. G. Teubner, 1897.
    [10] L. Li, J. J. Sun, Existence and multiplicity of solutions for the Kirchhoff equations with asymptotically linear nonlinearities, Nonlinear Anal. Real World Appl., 26 (2015), 391-399. doi: 10.1016/j.nonrwa.2015.07.002
    [11] Q. Li, X. Wu, A new result on high energy solutions for Schrödinger-Kirchhoff type equations in {\mathbb R}^N, Appl. Math. Lett., 30 (2014), 24-27. doi: 10.1016/j.aml.2013.12.002
    [12] Y. Li, F. Li, J. Shi, Existence of a positive solution to Kirchhoff type problems without compactness conditions, J. Differ. Equations, 253 (2012), 2285-2294. doi: 10.1016/j.jde.2012.05.017
    [13] J. L. Lions, On some questions in boundary value problems of mathematical physics, North-Holland Math. Stud., 30 (1978), 284-346. doi: 10.1016/S0304-0208(08)70870-3
    [14] Z. Liu, J. Su, T. Weth, Compactness results for Schrödinger equations with asymptotically linear terms, J. Differ. Equations, 231 (2006), 501-512. doi: 10.1016/j.jde.2006.05.007
    [15] W. Liu, X. He, Multiplicity of high energy solutions for superlinear Kirchhoff equations, J. Appl. Math. Comput., 39 (2012), 473-487. doi: 10.1007/s12190-012-0536-1
    [16] Y. Long, Index Theory for Symplectic Paths with Applications, Basel: Birkhäuser, 2002.
    [17] T. F. Ma, J. E. Muñoz Rivera, Positive solutions for a nonlinear nonlocal elliptic transmission problem, Appl. Math. Lett., 16 (2003), 243-248. doi: 10.1016/S0893-9659(03)80038-1
    [18] A. Mao, Z. Zhang, Sign-changing and multiple solutions of Kirchhoff type problems without the P.S. condition, Nonlinear Anal., 70 (2009), 1275-1287. doi: 10.1016/j.na.2008.02.011
    [19] K. Perera, Z. Zhang, Nontrivial solutions of Kirchhoff-type problems via the Yang index, J. Differ. Equations, 221 (2006), 246-255. doi: 10.1016/j.jde.2005.03.006
    [20] M. Reed, B. Simon, Methods of Modern Mathematical Physics, New York: Academic Press, 1978.
    [21] Y. Shan, Morse index and multiple solutions for the asymptotically linear Schrödinger type equation, Nonlinear Anal., 89 (2013), 170-178. doi: 10.1016/j.na.2013.05.014
    [22] W. Shuai, Sign-changing solutions for a class of Kirchhoff-type problem in bounded domains, J. Differ. Equations, 259 (2015), 1256-1274. doi: 10.1016/j.jde.2015.02.040
    [23] B. Simon, Schrödinger semigroups, Bull. Am. Math. Soc., 7 (1982), 447-526. doi: 10.1090/S0273-0979-1982-15041-8
    [24] M. Struwe, Variational Methods: Applications to Nonlinear Partial Differential Equations and Hamiltonian Systems, Berlin: Springer, 1990.
    [25] X. Wu, Existence of nontrivial solutions and high energy solutions for Schrödinger-Kirchhoff-type equations in {\mathbb R}^N, Nonlinear Anal. Real World Appl., 12 (2011), 1278-1287. doi: 10.1016/j.nonrwa.2010.09.023
    [26] Y. Wu, S. Liu, Existence and multiplicity of solutions for asymptotically linear Schrödinger-Kirchhoff equations, Nonlinear Anal. Real World Appl., 26 (2015), 191-198. doi: 10.1016/j.nonrwa.2015.05.010
    [27] Z. Yücedaǧ, Solutions of nonlinear problems involving p(x) Laplacian operator, Adv. Nonlinear Anal., 4 (2015), 285-293. doi: 10.1515/anona-2015-0044
    [28] F. Zhou, K. Wu, X. Wu, High energy solutions of systems of Kirchhoff-type equations on {\mathbb R}^N, Comput. Math. Appl., 66 (2013), 1299-1305. doi: 10.1016/j.camwa.2013.07.028
  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
