
Lung cancer diagnosis from computed tomography scans using convolutional neural network architecture with Mavage pooling technique

  • Received: 01 September 2024 Revised: 23 October 2024 Accepted: 08 November 2024 Published: 09 January 2025
  • Background 

Lung cancer is a deadly disease. An early diagnosis can significantly improve patient survival and quality of life. One potential solution is using deep learning (DL) algorithms to automate the diagnosis from patient computed tomography (CT) scans. However, the limited availability of training data, the computational complexity of existing algorithms, and their reliance on high-performance systems limit the potential of DL algorithms. To improve early lung cancer diagnosis, this study proposes a low-cost convolutional neural network (CNN) that uses a Mavage pooling technique to diagnose lung cancers.

    Methods 

The DL-based model uses five convolution layers with two residual connections and Mavage pooling layers. We trained the CNN using two publicly available datasets: the IQ_OTH/NCCD dataset and the chest CT scan dataset. Additionally, we integrated Mavage pooling into the AlexNet, ResNet-50, and GoogLeNet architectures to analyze the datasets. We evaluated the performance of the models based on accuracy and the area under the receiver operating characteristic curve (AUROC).

    Results 

    The CNN model achieved a 99.70% accuracy and a 99.66% AUROC when the scans were classified as either cancerous or non-cancerous. It achieved a 90.24% accuracy and a 94.63% AUROC when the scans were classified as containing either normal, benign, or malignant nodules. It achieved a 95.56% accuracy and a 99.37% AUROC when lung cancers were classified. Additionally, the results indicated that the diagnostic abilities of AlexNet, ResNet-50, and GoogLeNet were improved with the introduction of the Mavage pooling technique.

    Conclusions 

This study shows that a low-cost CNN can effectively diagnose lung cancers from patient CT scans. Utilizing the Mavage pooling technique significantly improves the CNN's diagnostic capabilities.

    Citation: Ayomide Abe, Mpumelelo Nyathi, Akintunde Okunade. Lung cancer diagnosis from computed tomography scans using convolutional neural network architecture with Mavage pooling technique[J]. AIMS Medical Science, 2025, 12(1): 13-27. doi: 10.3934/medsci.2025002
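The abstract does not define the Mavage operator; a common reading of the name is a blend of max and average pooling. Below is a minimal NumPy sketch under that assumption — the function name and the equal-weight blend are illustrative guesses, not the paper's definition:

```python
import numpy as np

def mavage_pool2d(x, k=2):
    """Hypothetical 'Mavage' pooling: the mean of max pooling and average
    pooling over each k x k window (an assumption based on the name)."""
    h, w = x.shape
    out = np.empty((h // k, w // k))
    for i in range(h // k):
        for j in range(w // k):
            win = x[i * k:(i + 1) * k, j * k:(j + 1) * k]
            out[i, j] = 0.5 * (win.max() + win.mean())
    return out

# toy 4x4 feature map pooled down to 2x2
feat = np.arange(1, 17, dtype=float).reshape(4, 4)
pooled = mavage_pool2d(feat)
```

In a real network this operator would replace the usual max- or average-pooling layer between convolution stages.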




The boundary value problems (BVPs) for differential equations have important applications in space science and engineering technology. A large number of mathematical models in engineering, astronomy, mechanics, economics, etc., are described by differential BVPs [1,2,3]. Except for a few special types, the exact solution of a BVP is difficult to express in analytical form, so finding a good approximate numerical solution is especially important. In [4], the Sinc collocation method provided an exponential convergence rate for two-point BVPs. In [5], a simple collocation method based on Haar wavelets was constructed for the numerical solution of linear and nonlinear second-order BVPs with periodic boundary conditions. Erge [6] studied the quadratic/linear rational spline collocation method for linear BVPs. In [7], numerical solutions of nonlinear BVPs were derived from B-spline wavelets. Pradip et al. applied B-splines to Bratu's problem, an important nonlinear BVP, in [8,9,10]. The works [11,12,13,14,15,16] solved BVPs by the reproducing kernel method. Based on the idea of least squares, Xu et al. [17,18,19] gave an effective algorithm in reproducing kernel spaces for solving fractional integro-differential equations and interface problems.

Using orthogonal polynomials to solve differential equations is a common technique. In [20,21,22,23], the authors used a Chebyshev–Galerkin scheme for the time-fractional diffusion equation. In [24], the authors developed a Jacobi rational operational approach for the time-fractional sub-diffusion equation on a semi-infinite domain. The works [25,26,27,28] developed multiscale orthonormal bases to solve BVPs with various boundary conditions, and also discussed stability and convergence order. Legendre wavelets are widely used in fields such as signal processing because of their good properties. In this paper, a multiscale function is constructed from Legendre polynomials to approximate the solution of differential equations. We use the multiscale refinement of Legendre wavelets to construct multiwavelets, which approximate better than a single wavelet. In addition, we adapt the Legendre wavelet to the specific problem, and the adapted basis still has compact support. For compactly supported functions, the smaller the support, the more concentrated the energy; moreover, compact support speeds up the computation and keeps error accumulation low.

The purpose of this paper is to construct a set of multiscale orthonormal bases with compact support based on Legendre wavelets to find the approximate solution of boundary value problems of the form:

$\begin{cases} u''(x)+p(x)u'(x)+q(x)u(x)=F(x,u), & x\in(0,1),\\ a_1u(0)+b_1u(1)+c_1u'(0)+d_1u'(1)=\alpha_1,\\ a_2u(0)+b_2u(1)+c_2u'(0)+d_2u'(1)=\alpha_2, \end{cases}$ (1.1)

where $p(x)$ and $q(x)$ are both smooth and $a_i,b_i,c_i,d_i$ ($i=1,2$) are constants. When $F$ is a function of $x$ only, $F(x,u)=f(x)$, Eq (1.1) is a linear boundary value problem. According to [21], a nonlinear boundary value problem can be transformed into a linear one by the quasi-Newton method, so this paper mainly studies the case $F(x,u)=f(x)$, that is, the linear boundary value problem.

It is well known that if the basis functions have good properties, the approximate solution of the boundary value problem inherits good convergence, stability, and so on. In [25], an orthonormal basis on [0,1] was constructed from compactly supported functions to obtain the numerical solution of the boundary value problem; however, those basis functions are not compactly supported on [0,1], and the approximate solution converges only linearly. In this paper, based on the idea of wavelets, a set of compactly supported orthonormal bases is constructed from Legendre polynomials, and the approximate solution of the boundary value problem is obtained with these bases. With the constructed orthonormal basis, the proposed algorithm is convergent and stable, and its convergence order is greater than 2.

The purpose of this work is to deduce numerical solutions of Eq (1.1). In Section 2, using wavelet theory, a set of multiscale orthonormal bases is presented via Legendre polynomials in $W_2^3[0,1]$. The constructed basis is compactly supported; compact support produces sparse matrices in the computation, which reduces the computational cost. The numerical method of the $\varepsilon$-approximate solution is presented in Section 3. Section 4 proves the convergence order and stability of the $\varepsilon$-approximate solution. In Section 5, the proposed algorithm is applied to some numerical experiments. Finally, we end with some conclusions in Section 6.

Wu and Lin introduced the reproducing kernel spaces $W_2^1[0,1]$ and $W_2^3[0,1]$ [29]. Let

$W_{2,0}^3[0,1]=\{u \mid u(0)=u'(0)=u''(0)=0,\ u\in W_2^3[0,1]\}.$

Clearly, $W_{2,0}^3[0,1]$ is a closed subspace of $W_2^3[0,1]$.

Legendre polynomials are mathematically important functions. This section constructs the orthonormal basis in $W_2^3[0,1]$ from Legendre polynomials, which are orthogonal in $L^2[-1,1]$. For convenience, we first rescale the Legendre polynomials to $[0,1]$ and obtain the following four functions:

$\varphi_0(x)=1;\quad \varphi_1(x)=\sqrt{3}\,(-1+2x);\quad \varphi_2(x)=\sqrt{5}\,(1-6x+6x^2);\quad \varphi_3(x)=\sqrt{7}\,(-1+12x-30x^2+20x^3).$
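These four functions are the shifted Legendre polynomials, normalized to be orthonormal on [0,1]. This can be verified with a short NumPy check (a sketch using Gauss–Legendre quadrature, which is exact for these polynomial degrees):

```python
import numpy as np

# the four rescaled, normalized Legendre polynomials on [0, 1]
phi = [
    lambda x: np.ones_like(x),
    lambda x: np.sqrt(3.0) * (2 * x - 1),
    lambda x: np.sqrt(5.0) * (6 * x**2 - 6 * x + 1),
    lambda x: np.sqrt(7.0) * (20 * x**3 - 30 * x**2 + 12 * x - 1),
]

# Gauss-Legendre nodes mapped from [-1, 1] to [0, 1]
nodes, weights = np.polynomial.legendre.leggauss(8)
x = 0.5 * (nodes + 1.0)
w = 0.5 * weights

# Gram matrix should be the 4x4 identity
G = np.array([[np.sum(w * phi[i](x) * phi[j](x)) for j in range(4)]
              for i in range(4)])
err = np.max(np.abs(G - np.eye(4)))
```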

    By translating and weighting the above four functions, we can construct

$\psi_l(x)=\sum_{j=0}^{3}\big(a_{lj}\varphi_j(2x)+b_{lj}\varphi_j(2x-1)\big),\quad l=0,1,2,3.$ (2.1)

In applications, we want $\psi_l(x)$ to have good properties, for example as many vanishing moments as possible and orthonormality, so $\psi_l(x)$ must satisfy the following conditions:

$\int_0^1 x^j\psi_l(x)\,dx=0,\quad j=0,1,2,\dots,l+3,$ (2.2)
$\int_0^1 \psi_i(x)\psi_j(x)\,dx=\delta_{ij},\quad i,j=0,1,2,3.$ (2.3)

The coefficients $a_{lj}, b_{lj}$ can be obtained from Eqs (2.2) and (2.3); $\psi_l(x)$ is then given by:

$\psi_0(x)=\sqrt{\tfrac{15}{17}}\begin{cases}-3+56x-216x^2+224x^3, & x\in[0,\tfrac12],\\ 61-296x+456x^2-224x^3, & x\in[\tfrac12,1].\end{cases}$ (2.4)
$\psi_1(x)=\sqrt{\tfrac{1}{21}}\begin{cases}-11+270x-1320x^2+1680x^3, & x\in[0,\tfrac12],\\ -619+2670x-3720x^2+1680x^3, & x\in[\tfrac12,1].\end{cases}$ (2.5)
$\psi_2(x)=\sqrt{\tfrac{35}{17}}\begin{cases}-1+30x-174x^2+256x^3, & x\in[0,\tfrac12],\\ 111-450x+594x^2-256x^3, & x\in[\tfrac12,1].\end{cases}$ (2.6)
$\psi_3(x)=\sqrt{\tfrac{5}{21}}\begin{cases}1-36x+246x^2-420x^3, & x\in[0,\tfrac12],\\ 209-804x+1014x^2-420x^3, & x\in[\tfrac12,1].\end{cases}$ (2.7)
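The four piecewise functions above can be checked against conditions (2.2) and (2.3) numerically: each $\psi_l$ should have vanishing moments up to order $l+3$ and unit $L^2$ norm. A sketch (piecewise evaluation plus Gauss quadrature on each half-interval):

```python
import numpy as np

def piecewise(cL, cR, scale):
    """Cubic on [0, 1/2] and on [1/2, 1]; coefficients lowest-degree first."""
    def psi(x):
        x = np.asarray(x, dtype=float)
        left = np.polyval(cL[::-1], x)    # np.polyval wants highest degree first
        right = np.polyval(cR[::-1], x)
        return scale * np.where(x <= 0.5, left, right)
    return psi

psi = [
    piecewise([-3, 56, -216, 224],     [61, -296, 456, -224],      np.sqrt(15 / 17)),
    piecewise([-11, 270, -1320, 1680], [-619, 2670, -3720, 1680],  np.sqrt(1 / 21)),
    piecewise([-1, 30, -174, 256],     [111, -450, 594, -256],     np.sqrt(35 / 17)),
    piecewise([1, -36, 246, -420],     [209, -804, 1014, -420],    np.sqrt(5 / 21)),
]

nodes, weights = np.polynomial.legendre.leggauss(12)
def integrate(f, a, b):                   # Gauss quadrature, exact here
    x = 0.5 * (b - a) * nodes + 0.5 * (a + b)
    return 0.5 * (b - a) * np.sum(weights * f(x))

# moments int_0^1 x^j psi_l dx for j = 0, ..., l+3 should vanish  (2.2)
moment_err = max(abs(integrate(lambda x, l=l, j=j: x**j * psi[l](x), 0, 0.5)
                     + integrate(lambda x, l=l, j=j: x**j * psi[l](x), 0.5, 1))
                 for l in range(4) for j in range(l + 4))
# each psi_l should have unit L^2 norm  (2.3)
norm_err = max(abs(integrate(lambda x, l=l: psi[l](x)**2, 0, 0.5)
                   + integrate(lambda x, l=l: psi[l](x)**2, 0.5, 1) - 1)
               for l in range(4))
```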

Following the ideas of wavelet theory, a scale transformation of the functions $\psi_l(x)$ gives the Legendre wavelets

$\psi_{lik}(x)=2^{\frac{i-1}{2}}\psi_l(2^{i-1}x-k),\quad l=0,1,2,3;\ i=1,2,\dots;\ k=0,1,\dots,2^{i-1}-1.$

Clearly, $\psi_{lik}(x)$ is compactly supported in $[\frac{k}{2^{i-1}},\frac{k+1}{2^{i-1}}]$. Let

$W_i=\mathrm{span}\{\psi_{lik}(x)\}_{l=0}^{3},\quad i=1,2,\dots;\ k=0,1,\dots,2^{i-1}-1.$

    Then,

$L^2[0,1]=V_0\oplus\bigoplus_{i=1}^{\infty}W_i,$

    where

$V_0=\mathrm{span}\{\varphi_0(x),\varphi_1(x),\varphi_2(x),\varphi_3(x)\}.$

    According to the above analysis, we can get the following theorem.

Theorem 2.1.

$\{\rho_j(x)\}_{j=1}^{\infty}=\{\varphi_0(x),\varphi_1(x),\varphi_2(x),\varphi_3(x),\psi_{010}(x),\psi_{110}(x),\psi_{210}(x),\psi_{310}(x),\dots,\psi_{0ik}(x),\psi_{1ik}(x),\psi_{2ik}(x),\psi_{3ik}(x),\dots\}$

is an orthonormal basis of $L^2[0,1]$.

Now we generate the orthonormal basis in $W_{2,0}^3[0,1]$ from the basis of $L^2[0,1]$. Write

$J^3u(x)=\frac{1}{2}\int_0^x (x-t)^2u(t)\,dt.$ (2.8)
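On monomials, the operator in (2.8) acts as a triple antiderivative, $J^3t^n=\frac{n!}{(n+3)!}x^{n+3}$, which gives a convenient sanity check; a sketch using Gauss quadrature on $[0,x]$:

```python
import numpy as np
from math import factorial

def J3(u, x):
    """(J^3 u)(x) = (1/2) * integral_0^x (x - t)^2 u(t) dt."""
    t, w = np.polynomial.legendre.leggauss(20)
    t = 0.5 * x * (t + 1.0)          # map nodes from [-1, 1] to [0, x]
    w = 0.5 * x * w
    return 0.5 * np.sum(w * (x - t) ** 2 * u(t))

# check J^3 t^n = n! x^{n+3} / (n+3)!  for a monomial
n, x0 = 2, 0.7
approx = J3(lambda t: t ** n, x0)
exact = factorial(n) * x0 ** (n + 3) / factorial(n + 3)
```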

Theorem 2.2. $\{J^3\rho_j(x)\}_{j=1}^{\infty}$ is an orthonormal basis of $W_{2,0}^3[0,1]$.

Proof. We only need to prove completeness and orthogonality. For $u\in W_{2,0}^3[0,1]$, if

$\langle u, J^3\rho_j\rangle_{W_{2,0}^3}=0,$

we can deduce $u\equiv 0$; then $\{J^3\rho_j(x)\}_{j=1}^{\infty}$ is complete. In fact,

$\langle u, J^3\rho_j\rangle_{W_{2,0}^3}=\langle u''', \rho_j\rangle_{L^2}=\int_0^1 u'''\rho_j\,dx=0.$ (2.9)

From Theorem 2.1, $u'''\equiv 0$. Since $u\in W_{2,0}^3[0,1]$, $u(0)=u'(0)=u''(0)=0$; then $u\equiv 0$.

According to Theorem 2.1 and Eq (2.9), orthonormality is obvious.

Since $W_{2,0}^3[0,1]\subset W_2^3[0,1]$, and $W_2^3[0,1]$ carries three more degrees of freedom than $W_{2,0}^3[0,1]$, the orthonormal basis of $W_2^3[0,1]$ is as follows:

    Theorem 2.3.

$\{J^3g_j(x)\}_{j=1}^{\infty}=\{1,\ x,\ \tfrac{x^2}{2}\}\cup\{J^3\rho_j(x)\}_{j=1}^{\infty}$

is an orthonormal basis of $W_2^3[0,1]$.

Define $L:\ W_2^3[0,1]\to L^2[0,1]$ by

$Lu=u''(x)+p(x)u'(x)+q(x)u(x).$

$L$ is a linear bounded operator [27]. Let $B_i:\ W_2^3[0,1]\to\mathbb{R}$, with

$B_iu=a_iu(0)+b_iu(1)+c_iu'(0)+d_iu'(1),\quad i=1,2.$

The quasi-Newton method is used to transform Eq (1.1) into a linear boundary value problem, whose operator equation is as follows:

$\begin{cases} Lu=f(x),\\ B_1u=\alpha_1,\quad B_2u=\alpha_2. \end{cases}$ (3.1)

Definition 3.1. $u_\varepsilon$ is called an $\varepsilon$-approximate solution of Eq (3.1), $\varepsilon>0$, if

$\|Lu_\varepsilon-f\|_{L^2}^2+\sum_{i=1}^{2}(B_iu_\varepsilon-\alpha_i)^2<\varepsilon^2.$

    In [27], it is shown that ε-approximate solution for Eq (3.1) exists by the following theorem.

Theorem 3.1. Equation (3.1) has an $\varepsilon$-approximate solution

$u_{\varepsilon n}(x)=\sum_{k=1}^{n}c_kJ^3g_k(x),$

where $n$ is a natural number determined by $\varepsilon$, and the $c_k$ satisfy

$\Big\|\sum_{k=1}^{n}c_kLJ^3g_k-Lu\Big\|_{L^2}^2+\sum_{l=1}^{2}\Big(\sum_{k=1}^{n}c_kB_lJ^3g_k-B_lu\Big)^2=\min_{c_k}\Big\{\Big\|\sum_{k=1}^{n}c_kLJ^3g_k-Lu\Big\|_{L^2}^2+\sum_{l=1}^{2}\Big(\sum_{k=1}^{n}c_kB_lJ^3g_k-B_lu\Big)^2\Big\}.$

To seek the $\varepsilon$-approximate solution, we just need the $c_k$. Let $G$ be the quadratic form in

$c=(c_1,\dots,c_n)^T,$
$G(c_1,\dots,c_n)=\Big\|\sum_{k=1}^{n}c_kLJ^3g_k-Lu\Big\|_{L^2}^2+\sum_{l=1}^{2}\Big(\sum_{k=1}^{n}c_kB_lJ^3g_k-B_lu\Big)^2.$ (3.2)

From Theorem 3.1,

$c=(c_1,\dots,c_n)^T$

is the minimum point of $G(c_1,\dots,c_n)$. If $L$ is invertible, the minimum point of $G$ exists and is unique.

In fact, the partial derivative of $G(c_1,\dots,c_n)$ with respect to $c_j$ is:

$\frac{\partial G}{\partial c_j}=2\sum_{k=1}^{n}c_k\langle LJ^3g_k,LJ^3g_j\rangle_{L^2}-2\langle LJ^3g_j,Lu\rangle_{L^2}+2\sum_{l=1}^{2}\Big(\sum_{k=1}^{n}c_kB_lJ^3g_k-B_lu\Big)B_lJ^3g_j.$

    Let

$\frac{\partial}{\partial c_j}G(c_1,\dots,c_n)=0,$

    so

$\sum_{k=1}^{n}c_k\langle LJ^3g_k,LJ^3g_j\rangle_{L^2}+\sum_{l=1}^{2}\sum_{k=1}^{n}c_kB_lJ^3g_k\,B_lJ^3g_j=\langle LJ^3g_j,Lu\rangle_{L^2}+\sum_{l=1}^{2}B_lJ^3g_j\,B_lu.$ (3.3)

Let $A_n$ be the $n$-order matrix and $b_n$ the $n$-dimensional vector, i.e.,

$A_n=\Big(\langle LJ^3g_k,LJ^3g_j\rangle_{L^2}+\sum_{l=1}^{2}B_lJ^3g_k\,B_lJ^3g_j\Big)_{n\times n},\quad b_n=\Big(\langle LJ^3g_j,Lu\rangle_{L^2}+\sum_{l=1}^{2}B_lJ^3g_j\,B_lu\Big)_n.$

    Then Eq (3.3) changes to

$A_nc=b_n.$ (3.4)

If $L$ is invertible, Eq (3.4) has exactly one solution $c$, and $c$ is the minimum point of $G$. That Eq (3.4) has a unique solution is proved as follows.

    Theorem 3.2. If L is invertible, Eq (3.3) has only one solution.

Proof. The homogeneous linear equation of Eq (3.4) is

$\sum_{k=1}^{n}c_k\langle LJ^3g_k,LJ^3g_j\rangle_{L^2}+\sum_{l=1}^{2}\sum_{k=1}^{n}c_kB_lJ^3g_k\,B_lJ^3g_j=0.$

We just have to prove that this equation has only the trivial solution. Multiply both sides by $c_j$ ($j=1,2,\dots,n$) and add all the equations together, so that

$\Big\langle\sum_{k=1}^{n}c_kLJ^3g_k,\sum_{j=1}^{n}c_jLJ^3g_j\Big\rangle_{L^2}+\sum_{l=1}^{2}\Big(\sum_{k=1}^{n}c_kB_lJ^3g_k\Big)\Big(\sum_{j=1}^{n}c_jB_lJ^3g_j\Big)=0.$

That is,

$\Big\|\sum_{k=1}^{n}c_kLJ^3g_k\Big\|_{L^2}^2+\sum_{l=1}^{2}\Big(\sum_{k=1}^{n}c_kB_lJ^3g_k\Big)^2=0.$

Clearly,

$\Big\|\sum_{k=1}^{n}c_kLJ^3g_k\Big\|_{L^2}^2=0,\quad \Big(\sum_{k=1}^{n}c_kB_lJ^3g_k\Big)^2=0.$

Because $\{J^3g_k\}$ is an orthonormal basis and $L$ is invertible, $c_k=0$. So Eq (3.3) has only one solution.
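The normal equations $A_nc=b_n$ can be assembled for any finite basis. The sketch below substitutes a hypothetical monomial basis $\{1,x,x^2,x^3\}$ for $\{J^3g_k\}$ and takes $Lu=u''+u'$, $B_1u=u(0)$, $B_2u=u(1)$, on the test problem $u''+u'=2+2x$, $u(0)=0$, $u(1)=1$, whose exact solution $u=x^2$ lies in the span, so the least-squares minimizer recovers it:

```python
import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(8)
x = 0.5 * (nodes + 1.0)             # Gauss nodes mapped to [0, 1]
w = 0.5 * weights

# toy basis g_k = x^k and its first two derivatives
def g(k, t):   return t ** k
def dg(k, t):  return k * t ** (k - 1) if k >= 1 else np.zeros_like(t)
def d2g(k, t): return k * (k - 1) * t ** (k - 2) if k >= 2 else np.zeros_like(t)

def Lg(k): return d2g(k, x) + dg(k, x)              # L g_k = g_k'' + g_k'
def Bg(k): return np.array([g(k, 0.0), g(k, 1.0)])  # (B1 g_k, B2 g_k)

f = 2 + 2 * x                        # right-hand side of L u = f
alpha = np.array([0.0, 1.0])         # boundary data

n = 4
A = np.array([[np.sum(w * Lg(k) * Lg(j)) + Bg(k) @ Bg(j) for k in range(n)]
              for j in range(n)])
b = np.array([np.sum(w * f * Lg(j)) + alpha @ Bg(j) for j in range(n)])

c = np.linalg.solve(A, b)            # minimizer of the quadratic form G
grid = np.linspace(0.0, 1.0, 11)
u_approx = sum(ck * grid ** k for k, ck in enumerate(c))
err = np.max(np.abs(u_approx - grid ** 2))
```

With the actual multiscale basis the matrix becomes sparse thanks to the compact supports, but the assembly pattern is the same.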

    Convergence and stability are important properties of algorithms. This section deals with the convergence and stability.

    In order to discuss the convergence, Theorem 4.1 is given as follows:

Theorem 4.1. $J^3\psi_{lik}(x)$ is compactly supported in $[\frac{k}{2^{i-1}},\frac{k+1}{2^{i-1}}]$.

Proof. When

$x<\frac{k}{2^{i-1}},\quad J^3\psi_{lik}(x)=0.$

When $x>\frac{k+1}{2^{i-1}}$, because $\psi_{lik}(x)$ has compact support,

$J^3\psi_{lik}(x)=\frac12\int_0^x(x-t)^2\psi_{lik}(t)\,dt=\frac12\int_{k/2^{i-1}}^{(k+1)/2^{i-1}}(x-t)^2\psi_{lik}(t)\,dt=2^{\frac{i-1}{2}-1}\int_{k/2^{i-1}}^{(k+1)/2^{i-1}}(x-t)^2\psi_l(2^{i-1}t-k)\,dt=2^{-\frac{5(i-1)}{2}-1}\int_0^1\big(2^{i-1}x-k-s\big)^2\psi_l(s)\,ds,\quad s=2^{i-1}t-k.$ (4.1)

Expanding the square gives a combination of $\int_0^1 s^j\psi_l(s)\,ds$, $j=0,1,2$, so according to Eq (2.2), $J^3\psi_{lik}(x)=0$. Hence $J^3\psi_{lik}(x)$ is compactly supported in $[\frac{k}{2^{i-1}},\frac{k+1}{2^{i-1}}]$.

Note that

$(J^3\psi_{li,k}(x))'=J^2\psi_{li,k}(x),\quad (J^3\psi_{li,k}(x))''=J^1\psi_{li,k}(x),$

where $J^2u(x)=\int_0^x(x-t)u(t)\,dt$ and $J^1u(x)=\int_0^x u(t)\,dt$. By the same argument as in the proof of Theorem 4.1, $J^1\psi_{lik}(x)$ and $J^2\psi_{lik}(x)$ are compactly supported in $[\frac{k}{2^{i-1}},\frac{k+1}{2^{i-1}}]$.

The convergence order is derived below. Assume

$u(x)=\sum_{j=0}^{2}c_j\frac{x^j}{j!}+\sum_{j=0}^{3}d_jJ^3\varphi_j(x)+\sum_{i=1}^{\infty}\sum_{k=0}^{2^{i-1}-1}\sum_{l=0}^{3}c^{(l)}_{i,k}J^3\psi_{li,k},$ (4.2)

    where

$c_j=\langle u,\tfrac{x^j}{j!}\rangle_{W_2^3},\quad d_j=\langle u,J^3\varphi_j(x)\rangle_{W_2^3},$

and

$c^{(l)}_{i,k}=\langle u,J^3\psi_{li,k}(x)\rangle_{W_2^3}.$

    And

$u_n(x)=\sum_{j=0}^{2}c_j\frac{x^j}{j!}+\sum_{j=0}^{3}d_jJ^3\varphi_j(x)+\sum_{i=1}^{n}\sum_{k=0}^{2^{i-1}-1}\sum_{l=0}^{3}c^{(l)}_{i,k}J^3\psi_{li,k}.$

Theorem 4.2. Assume $u_{\varepsilon n}(x)$ is the $\varepsilon$-approximate solution of Eq (3.1). If $u^{(m)}(x)$ is bounded on $[0,1]$, $m\in\mathbb{N}$, $3\le m\le 7$, then

$|u(x)-u_{\varepsilon n}(x)|\le 2^{-(m-2)n}M,$

where $M$ is a constant.

    Proof. From Definition 3.1 and Theorem 3.1, we get

$|u(x)-u_{\varepsilon n}(x)|\le M_0\|u-u_{\varepsilon n}\|_{W_2^3}\le M_0\|L^{-1}\|\,\|L(u-u_{\varepsilon n})\|_{L^2}\le M_0\|L^{-1}\|\big(\|L(u-u_{\varepsilon n})\|_{L^2}+|B_1(u-u_{\varepsilon n})|+|B_2(u-u_{\varepsilon n})|\big)\le M_0\|L^{-1}\|\big(\|L(u-u_n)\|_{L^2}+|B_1(u-u_n)|+|B_2(u-u_n)|\big).$

    Obviously,

$B_1(u-u_n)=0,\quad B_2(u-u_n)=0.$

    That is

$|u(x)-u_{\varepsilon n}(x)|\le M_0\|L^{-1}\|\,\|L(u-u_n)\|_{L^2}\le M_0\|L^{-1}\|\Big(\int_0^1(L(u-u_n))^2dx\Big)^{\frac12}\le M_0\|L^{-1}\|\Big(\max_{x\in[0,1]}|L(u-u_n)|^2\Big)^{\frac12}\le 3M_0\|L^{-1}\|M_1\max_{x\in[0,1]}\{|u-u_n|,|u'-u_n'|,|u''-u_n''|\},$

where

$M_1=\max_{x\in[0,1]}\{1,|p(x)|,|q(x)|\}.$

    We know

$|u-u_n|=\Big|\sum_{i=n+1}^{\infty}\sum_{k=0}^{2^{i-1}-1}\sum_{l=0}^{3}c^{(l)}_{i,k}J^3\psi_{li,k}(x)\Big|\le\sum_{i=n+1}^{\infty}\sum_{k=0}^{2^{i-1}-1}\sum_{l=0}^{3}|c^{(l)}_{i,k}|\,|J^3\psi_{li,k}(x)|.$

By the compact support of $J^p\psi_{lik}(x)$, $p=1,2,3$, for fixed $i$ and $x$, $J^p\psi_{lik}(x)\ne 0$ only for the single $k$ with $x\in[\frac{k}{2^{i-1}},\frac{k+1}{2^{i-1}}]$, so

$|u-u_n|\le\sum_{i=n+1}^{\infty}\sum_{l=0}^{3}|c^{(l)}_{i,k}|\,|J^3\psi_{li,k}(x)|.$

Similarly,

$|u'-u_n'|\le\sum_{i=n+1}^{\infty}\sum_{l=0}^{3}|c^{(l)}_{i,k}|\,|J^2\psi_{li,k}(x)|$

and

$|u''-u_n''|\le\sum_{i=n+1}^{\infty}\sum_{l=0}^{3}|c^{(l)}_{i,k}|\,|J^1\psi_{li,k}(x)|.$

Through the bounds on $J^1\psi_{li,k}(x)$, $J^2\psi_{li,k}(x)$, and $J^3\psi_{li,k}(x)$, one gets

$|u(x)-u_{\varepsilon n}(x)|\le 3M_0M_1\|L^{-1}\|\,|u''-u_n''|.$

The quantities $|c^{(l)}_{i,k}|$ and $|J^1\psi_{li,k}(x)|$ will be discussed below; $|c^{(l)}_{i,k}|$ is related to $u^{(m)}(x)$. In fact,

$|c^{(l)}_{i,k}|=|\langle u,J^3\psi_{li,k}(x)\rangle_{W_2^3}|=\Big|\int_0^1 u'''(x)\psi_{li,k}(x)\,dx\Big|=\Big|\int_{k/2^{i-1}}^{(k+1)/2^{i-1}}u'''(x)\psi_{li,k}(x)\,dx\Big|.$ (4.3)

The Taylor expansion of $u'''(x)$ at $\frac{k}{2^{i-1}}$ is

$u'''(x)=\sum_{j=3}^{m-1}\frac{u^{(j)}(\frac{k}{2^{i-1}})}{(j-3)!}\Big(x-\frac{k}{2^{i-1}}\Big)^{j-3}+\frac{u^{(m)}(\xi)}{(m-3)!}\Big(x-\frac{k}{2^{i-1}}\Big)^{m-3},\quad \xi\in\Big[\frac{k}{2^{i-1}},\frac{k+1}{2^{i-1}}\Big].$

Equation (4.3) becomes

$|c^{(l)}_{i,k}|\le\Big|\sum_{j=3}^{m-1}\frac{u^{(j)}(\frac{k}{2^{i-1}})}{(j-3)!}\int_{k/2^{i-1}}^{(k+1)/2^{i-1}}\Big(x-\frac{k}{2^{i-1}}\Big)^{j-3}\psi_{lik}(x)\,dx\Big|+\Big|\int_{k/2^{i-1}}^{(k+1)/2^{i-1}}\frac{u^{(m)}(\xi)}{(m-3)!}\Big(x-\frac{k}{2^{i-1}}\Big)^{m-3}\psi_{li,k}(x)\,dx\Big|,$

where, with the substitution $t=2^{i-1}x-k$,

$\int_{k/2^{i-1}}^{(k+1)/2^{i-1}}\Big(x-\frac{k}{2^{i-1}}\Big)^{j-3}\psi_{lik}(x)\,dx=2^{\frac{(5-2j)(i-1)}{2}}\int_0^1 t^{j-3}\psi_l(t)\,dt.$

    According to Eq (2.2),

$\sum_{j=3}^{m-1}\frac{u^{(j)}(\frac{k}{2^{i-1}})}{(j-3)!}\int_{k/2^{i-1}}^{(k+1)/2^{i-1}}\Big(x-\frac{k}{2^{i-1}}\Big)^{j-3}\psi_{lik}(x)\,dx=0,$

so

$|c^{(l)}_{i,k}|=\Big|\int_{k/2^{i-1}}^{(k+1)/2^{i-1}}\frac{u^{(m)}(\xi)}{(m-3)!}\Big(x-\frac{k}{2^{i-1}}\Big)^{m-3}\psi_{li,k}(x)\,dx\Big|\le\Big|\frac{u^{(m)}(\xi)}{(m-3)!}\Big|\int_{k/2^{i-1}}^{(k+1)/2^{i-1}}\Big|x-\frac{k}{2^{i-1}}\Big|^{m-3}|\psi_{li,k}(x)|\,dx\le\Big|\frac{u^{(m)}(\xi)}{(m-3)!}\Big|\,2^{-(m-3)(i-1)}\int_{k/2^{i-1}}^{(k+1)/2^{i-1}}|\psi_{li,k}(x)|\,dx\le\Big|\frac{u^{(m)}(\xi)}{(m-3)!}\Big|\,2^{-(m-3)(i-1)}\,2^{-\frac{i-1}{2}}.$

Because $u^{(m)}(x)$ is bounded,

$\Big|\frac{u^{(m)}(\xi)}{(m-3)!}\Big|\le M_3,$

then

$|c^{(l)}_{i,k}|\le 2^{-\frac{(2m-5)(i-1)}{2}}M_3.$ (4.4)

By the compact support of $J^1\psi_{lik}(x)$,

$|J^1\psi_{li,k}(x)|=\Big|\int_0^x\psi_{li,k}(t)\,dt\Big|\le\int_{k/2^{i-1}}^{(k+1)/2^{i-1}}|\psi_{li,k}(t)|\,dt\le 2^{-\frac{i-1}{2}}.$

    According to the above analysis,

$|u''-u_n''|\le\sum_{i=n+1}^{\infty}4M_3\,2^{-\frac{(2m-5)(i-1)}{2}}\,2^{-\frac{i-1}{2}}=4M_3\sum_{i=n+1}^{\infty}2^{-(m-2)(i-1)}\le 8M_3\,2^{-(m-2)n}\quad(m\ge 3).$

    That is

$|u(x)-u_{\varepsilon n}(x)|\le 2^{-(m-2)n}M,$

    where M is a constant.

Stability analysis is conducted below. According to Section 3, the stability of the algorithm depends on the stability of Eq (3.4). By the following Property 4.1, the stability of the algorithm can be discussed through the condition number of the matrix $A$.

Property 4.1. If the matrix $A$ is symmetric and invertible, then

$\mathrm{cond}(A)=\Big|\frac{\lambda_{\max}}{\lambda_{\min}}\Big|,$

    where λmax and λmin are the largest and smallest eigenvalues of A respectively.

In this paper,

$A_n=(a_{ij})_{n\times n}=\Big(\langle LJ^3g_i,LJ^3g_j\rangle_{L^2}+\sum_{l=1}^{2}B_lJ^3g_i\,B_lJ^3g_j\Big)_{n\times n}.$

Clearly, $A_n$ is symmetric. From Theorem 3.2, $A_n$ is invertible. In order to discuss the stability of the algorithm, only the eigenvalues of the matrix $A_n$ need to be discussed.

Theorem 4.3. Assume $u\in W_2^3$ and $\|u\|_{W_2^3}=1$. If $L$ is an invertible differential operator, then

$\|Lu\|_{L^2}\ge\frac{1}{\|L^{-1}\|}.$

Proof. Since $L$ is invertible, assume $Lu=v$; then $u=L^{-1}v$. Moreover,

$1=\|u\|_{W_2^3}=\|L^{-1}v\|_{W_2^3}\le\|L^{-1}\|\,\|v\|_{L^2}.$

Then,

$\|v\|_{L^2}\ge\frac{1}{\|L^{-1}\|}.$

That is,

$\|Lu\|_{L^2}\ge\frac{1}{\|L^{-1}\|}.$

Theorem 4.4. Let $\lambda$ be an eigenvalue of the matrix $A$ of Eq (3.4), let $x=(x_1,\dots,x_n)^T$ be an eigenvector associated with $\lambda$, and let $\|x\|=1$; then

$\lambda\le\|L\|^2+2.$

Proof. By $Ax=\lambda x$,

$\lambda x_i=\sum_{j=1}^{n}a_{ij}x_j=\sum_{j=1}^{n}\Big(\langle LJ^3g_i,LJ^3g_j\rangle_{L^2}+\sum_{l=1}^{2}B_lJ^3g_i\,B_lJ^3g_j\Big)x_j=\Big\langle LJ^3g_i,\sum_{j=1}^{n}x_jLJ^3g_j\Big\rangle_{L^2}+\sum_{l=1}^{2}B_lJ^3g_i\sum_{j=1}^{n}B_lJ^3g_j\,x_j,\quad i=1,\dots,n.$ (4.5)

Multiply both sides of Eq (4.5) by $x_i$ and add the equations from $i=1$ to $i=n$, so that

$\lambda=\lambda\sum_{i=1}^{n}x_i^2=\Big\langle\sum_{i=1}^{n}x_iLJ^3g_i,\sum_{j=1}^{n}x_jLJ^3g_j\Big\rangle_{L^2}+\sum_{l=1}^{2}\Big(\sum_{i=1}^{n}B_lJ^3g_i\,x_i\Big)\Big(\sum_{j=1}^{n}B_lJ^3g_j\,x_j\Big)=\Big\|\sum_{i=1}^{n}x_iLJ^3g_i\Big\|_{L^2}^2+\sum_{l=1}^{2}\Big(\sum_{i=1}^{n}B_lJ^3g_i\,x_i\Big)^2\le\|L\|^2\sum_{i=1}^{n}x_i^2+2\sum_{i=1}^{n}x_i^2=(\|L\|^2+2)\|x\|^2.$ (4.6)

Since

$\|x\|=1,\quad \lambda\le\|L\|^2+2.$

From Theorem 4.3 and Eq (4.6), we can get

$\lambda\ge\Big\|\sum_{i=1}^{n}x_iLJ^3g_i\Big\|_{L^2}^2=\Big\|L\Big(\sum_{i=1}^{n}x_iJ^3g_i\Big)\Big\|_{L^2}^2\ge\frac{1}{\|L^{-1}\|^2}.$

Then,

$\mathrm{cond}(A)=\Big|\frac{\lambda_{\max}}{\lambda_{\min}}\Big|\le\big(\|L\|^2+2\big)\|L^{-1}\|^2.$

That is, the condition number of $A$ is bounded, so the presented method is stable.

    This section discusses numerical examples to reveal the accuracy of the proposed algorithm. Examples 5.1 and 5.3 are linear and nonlinear BVPs respectively. Example 5.2 shows that our method also applies to Eq (1.1) with other linear boundary value conditions. In this paper, N is the number of bases, and

$N=7+4(2^n-1),\quad n=1,2,\dots.$

$e_N(x)$ denotes the absolute error. C.R. and cond represent the convergence order and the condition number, respectively. For convenience, we denote

$e_N(x)=|u(x)-u_N(x)|$

and

$\mathrm{C.R.}=\log_2\frac{\max|e_N(x)|}{\max|e_{N'}(x)|},$

where $N'$ is the next basis size after $N$.
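The C.R. column in the tables below is just this base-2 ratio of successive maximum errors; for instance, applied to the first two maximum errors of Table 2:

```python
import numpy as np

def convergence_rate(err_coarse, err_fine):
    """C.R. = log2(max|e_N| / max|e_N'|) for successive basis sizes."""
    return np.log2(err_coarse / err_fine)

# maximum errors for N = 11 and N = 19, taken from Table 2 of Example 5.1
cr = convergence_rate(1.66e-8, 6.85e-11)   # about 7.92, matching the table
```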

    Example 5.1. Consider the test problem suggested in [28,30]

$\begin{cases} u''=u'+2u+4x-2e^x, & x\in(0,1),\\ u(0)=2,\quad u(1)=e-1, \end{cases}$

where the exact solution is $u(x)=e^x-2x+1$. The numerical results are shown in Table 1. It is clear from Table 1 that the present method produces a convergent solution for different values of $N$. In addition, the results of the proposed algorithm in Table 1 are compared with those in [28,30]; the proposed algorithm is clearly better. Table 2 shows $e_N(x)$, C.R., cond, and CPU time. The unit of CPU time is the second, denoted s.
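As a quick sanity check, the exact solution can be substituted back into the equation and boundary conditions numerically (using the reading $u''=u'+2u+4x-2e^x$, $u(0)=2$, $u(1)=e-1$ adopted here):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 201)
u = np.exp(x) - 2 * x + 1            # exact solution u = e^x - 2x + 1
du = np.exp(x) - 2                   # u'
d2u = np.exp(x)                      # u''

# residual of u'' - u' - 2u - (4x - 2e^x), and the boundary errors
max_res = np.max(np.abs(d2u - du - 2 * u - (4 * x - 2 * np.exp(x))))
bc_err = max(abs(u[0] - 2.0), abs(u[-1] - (np.e - 1.0)))
```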

    Table 1.  eN(x) of Example 5.1.
    x eN(x) of [30] e66(x) of [28] e35(x) e67(x)
    0 0 5.67e-9 9.94e-14 8.88e-16
    0.1 1.19e-5 3.35e-9 2.04e-13 2.22e-16
    0.2 4.18e-5 3.93e-10 2.16e-13 1.55e-15
    0.3 4.96e-5 1.33e-9 1.42e-13 8.88e-16
    0.4 6.04e-5 1.40e-9 1.68e-14 1.33e-15
    0.5 6.33e-5 1.82e-9 1.82e-13 4.44e-16
    0.6 6.23e-5 5.96e-9 1.52e-13 2.66e-15
    0.7 5.76e-5 1.14e-8 1.66e-13 8.88e-16
    0.8 4.23e-5 1.52e-8 4.36e-13 4.44e-15
    0.9 2.15e-5 1.66e-8 4.93e-13 4.44e-16
    1.0 0 1.90e-8 2.67e-13 4.44e-15

    Table 2.  eN(x), C.R. and cond of Example 5.1.
    n N maxeN(x) C.R. cond CPU(s)
    1 11 1.66e-8 274.262 2.57
    2 19 6.85e-11 7.92 274.262 7.89
    3 35 6.55e-13 6.71 274.262 24.42
    4 67 7.93e-15 6.40 274.262 82.73


    Example 5.2. Consider the problem suggested in [25,28].

$\begin{cases} u''+u'+xu=f(x), & x\in(0,1),\\ u(0)=0,\quad u(1)+u(\tfrac12)=\sin\tfrac12+\sin 1. \end{cases}$

The exact solution is $u(x)=\sin x$, with $f(x)=\cos x-\sin x+x\sin x$. This is a boundary value problem with multipoint boundary conditions. Table 3 shows the maximum absolute error $ME_n$, C.R., and cond; compared with the other algorithms, the results obtained demonstrate that our algorithm is remarkably effective. The numerical errors, shown in Figures 1 and 2, also confirm good accuracy.

    Table 3.  MEn, C.R. and cond of Example 5.2.
The present method | [25] | [28]
N MEn C.R. cond | N MEn C.R. cond | N MEn C.R. cond
11 3.73e-9 195.05 | 10 6.34e-6 3.96 182.06 | 11 1.88e-4
19 2.97e-11 6.97 195.05 | 18 4.04e-7 3.98 182.06 | 19 5.99e-5 1.65 1.49e6
35 2.13e-13 7.12 195.05 | 34 2.54e-8 4.06 182.06 | 35 1.84e-5 1.70 3.74e8
67 2.77e-15 6.27 195.05 | 66 1.59e-9 3.95 182.06 | 67 4.62e-6 1.99 8.87e10

Figure 1.  eN(x) of Example 5.2 (n = 35).
Figure 2.  eN(x) of Example 5.2 (n = 67).

    Example 5.3. Consider a nonlinear problem suggested in [7,9]

$\begin{cases} u''+\lambda e^{u}=0, & x\in(0,1),\\ u(0)=0,\quad u(1)=0, \end{cases}$

    where

$u(x)=-2\ln\Big(\cosh\big((x-\tfrac12)\tfrac{\theta}{2}\big)\big/\cosh(\tfrac{\theta}{4})\Big),$

    and θ satisfies

$\theta-\sqrt{2\lambda}\,\cosh(\theta/4)=0.$

This is the second-order nonlinear Bratu problem. The Bratu equation is widely used in engineering fields such as spark discharge and semiconductor manufacturing. In physics, the Bratu equation describes the physical properties of microcrystalline silica-gel solar cells; in biology, it describes kinetic models of some biochemical reactions in living organisms. For this problem we take $u_0(x)=x(1-x)$ and $k=3$, where $k$ is the number of iterations of the algorithm mentioned in [27]. For $\lambda=1$ and $\lambda=2$, $e_N(x)$ is listed in Tables 4 and 5, respectively.
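The parameter $\theta$ can be computed by a simple fixed-point iteration on $\theta=\sqrt{2\lambda}\cosh(\theta/4)$, after which the closed-form solution can be checked against the differential equation by finite differences (a sketch, for $\lambda=1$; the residual tolerance is loose because of second-difference roundoff):

```python
import numpy as np

lam = 1.0
theta = 1.0
for _ in range(100):                  # fixed point of theta = sqrt(2*lam)*cosh(theta/4)
    theta = np.sqrt(2 * lam) * np.cosh(theta / 4)

def u(x):
    """Closed-form Bratu solution on [0, 1]."""
    return -2 * np.log(np.cosh((x - 0.5) * (theta / 2)) / np.cosh(theta / 4))

# residual of u'' + lam * e^u via central differences
h = 1e-4
xs = np.linspace(0.1, 0.9, 9)
d2u = (u(xs + h) - 2 * u(xs) + u(xs - h)) / h ** 2
max_res = np.max(np.abs(d2u + lam * np.exp(u(xs))))
bc = (float(u(np.array(0.0))), float(u(np.array(1.0))))
```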

    Table 4.  eN(x) of Example 5.3 (λ=1).
    x eN(x) of [8] eN(x) of [9] eN(x)
0 0 0 4.4959e-11
0.2 1.4958e-9 2.4390e-5 4.1096e-11
0.4 2.7218e-9 4.2096e-5 7.1502e-12
0.6 2.7218e-9 4.2096e-5 7.1483e-12
0.8 1.4958e-9 2.4390e-5 4.1104e-11

    Table 5.  eN(x) of Example 5.3 (λ=2).
    x eN(x) of [7] eN(x) of [9] eN(x)
0 5.8988e-26 0 1.1801e-12
0.2 1.3070e-7 6.9297e-5 2.5646e-10
0.4 1.4681e-7 1.0775e-4 1.6666e-9
0.6 1.4681e-7 1.0775e-4 1.6666e-9
0.8 1.3070e-7 6.9297e-5 2.5646e-10


In this paper, based on Legendre polynomials, we construct orthonormal bases in $L^2[0,1]$ and $W_2^3[0,1]$, respectively, and prove that these bases are orthonormal and compactly supported. Using the orthogonality of the basis, we present an algorithm to obtain the approximate solution of boundary value problems. Using the compact support of the basis, we prove that the convergence order of the presented method is related to the boundedness of $u^{(m)}(x)$. Finally, three numerical examples show that the absolute error and convergence order of the algorithm are better than those of other methods.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

This study was supported by the National Natural Science Foundation of China (Grant No. 12101164), the Characteristic Innovative Scientific Research Project of Guangdong Province (2023KTSCX181, 2023KTSCX183), and the Basic and Applied Basic Research Project of Zhuhai City (ZH24017003200026PWC).

    The authors have no conflicts of interest to declare.


    Acknowledgments



We wish to express our sincere gratitude and appreciation to the DSI–CSIR Inter-bursary Support (IBS) Programme for financial support. The data used in this study are publicly available benchmark data and do not raise any ethical issues. The data supporting the findings of this study are available upon reasonable request from the corresponding author.


    [1] Cruz CSD, Tanoue LT, Matthay RA (2011) Lung cancer: epidemiology, etiology, and prevention. Clin Chest Med 32: 605-644. https://doi.org/10.1016/j.ccm.2011.09.001
    [2] Bray F, Ferlay J, Soerjomataram I, et al. (2018) Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin 68: 394-424. https://doi.org/10.3322/caac.21492
    [3] Schabath MB, Cote ML (2019) Cancer progress and priorities: lung cancer. Cancer Epidemiol Biomarkers Prev 28: 1563-1579. https://doi.org/10.1158/1055-9965.EPI-19-0221
    [4] Yoon SM, Shaikh T, Hallman M (2017) Therapeutic management options for stage III non-small cell lung cancer. World J Clin Oncol 8: 1-20. https://doi.org/10.5306/wjco.v8.i1.1
    [5] Hoffman RM, Atallah RP, Struble RD, et al. (2020) Lung cancer screening with low-dose CT: a meta-analysis. J Gen Intern Med 35: 3015-3025. https://doi.org/10.1007/s11606-020-05951-7
    [6] Silvestri GA, Goldman L, Tanner NT, et al. (2023) Outcomes from more than 1 million people screened for lung cancer with low-dose CT imaging. Chest 164: 241-251. https://doi.org/10.1016/j.chest.2023.02.003
    [7] Shin HJ, Kim MS, Kho BG, et al. (2020) Delayed diagnosis of lung cancer due to misdiagnosis as worsening of sarcoidosis: a case report. BMC Pulm Med 20: 1-4. https://doi.org/10.1186/s12890-020-1105-2
    [8] Del Ciello A, Franchi P, Contegiacomo A, et al. (2017) Missed lung cancer: when, where, and why?. Diagn Interv Radiol 23: 118-126. https://doi.org/10.5152/dir.2016.16187
    [9] Friedland B (2009) Medicolegal issues related to cone beam CT. Semin Orthod 15: 77-84. https://doi.org/10.1053/j.sodo.2008.09.010
    [10] Krupinski EA, Berbaum KS, Caldwell RT, et al. (2010) Long radiology workdays reduce detection and accommodation accuracy. J Am Coll Radiol 7: 698-704. https://doi.org/10.1016/j.jacr.2010.03.004
    [11] Abujudeh HH, Boland GW, Kaewlai R, et al. (2010) Abdominal and pelvic computed tomography (CT) interpretation: discrepancy rates among experienced radiologists. Eur Radiol 20: 1952-1957. https://doi.org/10.1007/s00330-010-1763-1
    [12] Jacobsen MM, Silverstein SC, Quinn M, et al. (2017) Timeliness of access to lung cancer diagnosis and treatment: a scoping literature review. Lung cancer 112: 156-164. https://doi.org/10.1016/j.lungcan.2017.08.011
    [13] Sathyakumar K, Munoz M, Singh J, et al. (2020) Automated lung cancer detection using artificial intelligence (AI) deep convolutional neural networks: a narrative literature review. Cureus 12: e10017. https://doi.org/10.7759/cureus.10017
    [14] Huang S, Yang J, Shen N, et al. (2023) Artificial intelligence in lung cancer diagnosis and prognosis: current application and future perspective. Semin Cancer Biol 89: 30-37. https://doi.org/10.1016/j.semcancer.2023.01.006
    [15] Ardila D, Kiraly AP, Bharadwaj S, et al. (2019) End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nat Med 25: 954-961. https://doi.org/10.1038/s41591-019-0447-x
    [16] Alrahhal MS, Alqhtani E (2021) Deep learning-based system for detection of lung cancer using fusion of features. Int J Comput Sci Mobile Comput 10: 57-67. https://doi.org/10.47760/ijcsmc.2021.v10i02.009
    [17] Huang X, Shan J, Vaidya V (2017) Lung nodule detection in CT using 3D convolutional neural networks. 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017) : 379-383. https://doi.org/10.1109/ISBI.2017.7950542
    [18] Guo Y, Song Q, Jiang M, et al. (2021) Histological subtypes classification of lung cancers on CT images using 3D deep learning and radiomics. Acad Radiol 28: e258-e266. https://doi.org/10.1016/j.acra.2020.06.010
    [19] Cui X, Zheng S, Heuvelmans MA, et al. (2022) Performance of a deep learning-based lung nodule detection system as an alternative reader in a Chinese lung cancer screening program. Eur J Radiol 146: 110068. https://doi.org/10.1016/j.ejrad.2021.110068
    [20] Dunn B, Pierobon M, Wei Q (2023) Automated classification of lung cancer subtypes using deep learning and CT-scan based radiomic analysis. Bioengineering 10: 690. https://doi.org/10.3390/bioengineering10060690
    [21] de Margerie-Mellon C, Chassagnon G (2023) Artificial intelligence: a critical review of applications for lung nodule and lung cancer. Diagn Interv Imaging 104: 11-17. https://doi.org/10.1016/j.diii.2022.11.007
    [22] Ahmed SF, Alam MSB, Hassan M, et al. (2023) Deep learning modelling techniques: current progress, applications, advantages, and challenges. Artif Intell Rev 56: 13521-13617. https://doi.org/10.1007/s10462-023-10466-8
    [23] Zhang J, Xia Y, Zeng H, et al. (2018) NODULe: combining constrained multi-scale LoG filters with densely dilated 3D deep convolutional neural network for pulmonary nodule detection. Neurocomputing 317: 159-167. https://doi.org/10.1016/j.neucom.2018.08.022
    [24] AL-Huseiny MS, Sajit AS (2021) Transfer learning with GoogLeNet for detection of lung cancer. Indones J Electr Eng Comput Sci 22: 1078-1086. https://doi.org/10.11591/ijeecs.v22.i2.pp1078-1086
    [25] Sakshiwala, Singh MP (2023) A new framework for multi-scale CNN-based malignancy classification of pulmonary lung nodules. J Ambient Intell Human Comput 14: 4675-4683. https://doi.org/10.1007/s12652-022-04368-w
    [26] Mamun M, Mahmud MI, Meherin M, et al. (2023) LCDctCNN: lung cancer diagnosis of CT scan images using CNN based model. 2023 10th International Conference on Signal Processing and Integrated Networks (SPIN): 205-212. https://doi.org/10.1109/SPIN57001.2023.10116075
    [27] Krizhevsky A, Sutskever I, Hinton GE (2012) ImageNet classification with deep convolutional neural networks. Adv Neural Inf Process Syst 25.
    [28] Hua KL, Hsu CH, Hidayati SC, et al. (2015) Computer-aided classification of lung nodules on computed tomography images via deep learning technique. Onco Targets Ther 8: 2015-2022. https://doi.org/10.2147/OTT.S80733
    [29] Ronneberger O, Fischer P, Brox T (2015) U-net: Convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W., Frangi, A. Eds. Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015. Springer International Publishing, 234-241. https://doi.org/10.1007/978-3-319-24574-4_28
    [30] Gulshan V, Peng L, Coram M, et al. (2016) Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 316: 2402-2410. https://doi.org/10.1001/jama.2016.17216
    [31] Kaul V, Enslin S, Gross SA (2020) History of artificial intelligence in medicine. Gastrointest Endosc 92: 807-812. https://doi.org/10.1016/j.gie.2020.06.040
    [32] Esteva A, Kuprel B, Novoa RA, et al. (2017) Dermatologist-level classification of skin cancer with deep neural networks. Nature 542: 115-118. https://doi.org/10.1038/nature21056
    [33] Saouli R, Akil M, Kachouri R (2018) Fully automatic brain tumor segmentation using end-to-end incremental deep neural networks in MRI images. Comput Methods Programs Biomed 166: 39-49. https://doi.org/10.1016/j.cmpb.2018.09.007
    [34] Lorenzo PR, Nalepa J, Bobek-Billewicz B, et al. (2019) Segmenting brain tumors from FLAIR MRI using fully convolutional neural networks. Comput Methods Programs Biomed 176: 135-148. https://doi.org/10.1016/j.cmpb.2019.05.006
    [35] Moitra D, Mandal RK (2019) Automated AJCC staging of non-small cell lung cancer (NSCLC) using deep convolutional neural network (CNN) and recurrent neural network (RNN). Health Inf Sci Syst 7: 1-12. https://doi.org/10.1007/s13755-019-0077-1
    [36] Chaunzwa TL, Hosny A, Xu Y, et al. (2021) Deep learning classification of lung cancer histology using CT images. Sci Rep 11: 1-12. https://doi.org/10.1038/s41598-021-84630-x
    [37] Anari S, Tataei Sarshar N, Mahjoori N, et al. (2022) Review of deep learning approaches for thyroid cancer diagnosis. Math Probl Eng 2022: 5052435. https://doi.org/10.1155/2022/5052435
    [38] Painuli D, Bhardwaj S, Köse U (2022) Recent advancement in cancer diagnosis using machine learning and deep learning techniques: a comprehensive review. Comput Biol Med 146: 105580. https://doi.org/10.1016/j.compbiomed.2022.105580
    [39] Hosseini SH, Monsefi R, Shadroo S (2024) Deep learning applications for lung cancer diagnosis: a systematic review. Multimed Tools Appl 83: 14305-14335. https://doi.org/10.1007/s11042-023-16046-w
    [40] Aamir M, Rahman Z, Abro WA, et al. (2023) Brain tumor classification utilizing deep features derived from high-quality regions in MRI images. Biomed Signal Proces 85: 104988. https://doi.org/10.1016/j.bspc.2023.104988
    [41] Sun M, Song Z, Jiang X, et al. (2017) Learning pooling for convolutional neural network. Neurocomputing 224: 96-104. https://doi.org/10.1016/j.neucom.2016.10.049
    [42] Kumar RL, Kakarla J, Isunuri BV, et al. (2021) Multi-class brain tumor classification using residual network and global average pooling. Multimed Tools Appl 80: 13429-13438. https://doi.org/10.1007/s11042-020-10335-4
    [43] Zafar A, Aamir M, Mohd Nawi N, et al. (2022) A comparison of pooling methods for convolutional neural networks. Appl Sci 12: 8643. https://doi.org/10.3390/app12178643
    [44] Nirthika R, Manivannan S, Ramanan A, et al. (2022) Pooling in convolutional neural networks for medical image analysis: a survey and an empirical study. Neural Comput Appl 34: 5321-5347. https://doi.org/10.1007/s00521-022-06953-8
    [45] Xiong W, Zhang L, Du B, et al. (2017) Combining local and global: rich and robust feature pooling for visual recognition. Pattern Recogn 62: 225-235. https://doi.org/10.1016/j.patcog.2016.08.006
    [46] Dogan Y (2023) A new global pooling method for deep neural networks: Global average of top-k max-pooling. Trait Signal 40: 577-587. https://doi.org/10.18280/ts.400216
    [47] Nirthika R, Manivannan S, Ramanan A, et al. (2022) Pooling in convolutional neural networks for medical image analysis: a survey and an empirical study. Neural Comput Appl 34: 5321-5347. https://doi.org/10.1007/s00521-022-06953-8
    [48] Abuqaddom I, Mahafzah BA, Faris H (2021) Oriented stochastic loss descent algorithm to train very deep multi-layer neural networks without vanishing gradients. Knowledge-Based Systems 230: 107391. https://doi.org/10.1016/j.knosys.2021.107391
    [49] LeCun Y, Bottou L, Bengio Y, et al. (1998) Gradient-based learning applied to document recognition. Proc IEEE 86: 2278-2324. https://doi.org/10.1109/5.726791
    [50] Sabri N, Hamed HNA, Ibrahim Z, et al. (2020) A comparison between average and max-pooling in convolutional neural network for scoliosis classification. Int J Adv Trends Comput Sci Eng 9. https://doi.org/10.30534/ijatcse/2020/9791.42020
    [51] Hyun J, Seong H, Kim E (2021) Universal pooling–a new pooling method for convolutional neural networks. Expert Syst Appl 180: 115084. https://doi.org/10.1016/j.eswa.2021.115084
    [52] Özdemir C (2023) Avg-topk: a new pooling method for convolutional neural networks. Expert Syst Appl 223: 119892. https://doi.org/10.1016/j.eswa.2023.119892
    [53] Zhou Q, Qu Z, Cao C (2021) Mixed pooling and richer attention feature fusion for crack detection. Pattern Recogn Lett 145: 96-102. https://doi.org/10.1016/j.patrec.2021.02.005
    [54] Hou Q, Zhang L, Cheng MM, et al. (2020) Strip pooling: Rethinking spatial pooling for scene parsing. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition: 4003-4012.
    [55] Kareem HF, AL-Husieny MS, Mohsen FY, et al. (2021) Evaluation of SVM performance in the detection of lung cancer in marked CT scan dataset. Indones J Electr Eng Comput Sci 21: 1731-1738. https://doi.org/10.11591/ijeecs.v21.i3.pp1731-1738
    [56] Alyasriy H (2020) The IQ-OTHNCCD lung cancer dataset. Mendeley Data. Available from: https://data.mendeley.com/datasets/bhmdr45bh2/1. Retrieved June 15, 2024.
    [57] Hany M (2020) Chest CT-scan images dataset. Available from: https://www.kaggle.com/datasets/mohamedhanyyy/chest-ctscan-images. Retrieved June 15, 2024.
    [58] He K, Zhang X, Ren S, et al. (2016) Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition: 770-778.
    [59] Szegedy C, Liu W, Jia Y, et al. (2015) Going deeper with convolutions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition: 1-9.
    [60] Paszke A, Gross S, Massa F, et al. (2019) PyTorch: an imperative style, high-performance deep learning library. Adv Neural Inf Process Syst 32.
  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)