




The fractional diffusion equation (FDE), obtained by replacing the first-order time derivative and/or the second-order space derivative in the standard diffusion equation by a generalized derivative of fractional order, has been used successfully for modelling relevant physical processes, see [1,5,14,15,21]. Recently, research on inverse problems connected with fractional derivatives has become more and more popular. Since Cheng in [2] studied an inverse problem for a fractional diffusion equation, many topics have been discussed thoroughly [11,12,23,26,27]. For the problem of fractional numerical differentiation, the authors of [17,18] give different regularization methods. In [13,26,30], some inverse source problems for fractional diffusion equations are considered. In [25,28], Liu et al. consider the backward time-fractional diffusion problem. In [4,8,10,16], many results on inverse coefficient problems are established. For more references on inverse problems for fractional diffusion equations, please consult the survey paper [9]. However, in some situations of anomalous diffusion, the diffusion indexes and the diffusion coefficient are unknown. This leads to the determination of coefficients, which is a classical inverse problem. It should be mentioned that most of the existing literature investigates the determination of only one unknown parameter or function. However, in many practical situations, one wishes to reconstruct more than one physical parameter simultaneously. To the authors' knowledge, there are few works on this aspect. For example, in [4], the fractional order $\alpha$ in ${}_tD^{\alpha}u(x,t)=\triangle u(x,t)$ is determined by an analytic method. In [24], the fractional orders $\alpha,\beta$ in ${}_tD^{\alpha}u(x,t)=-(-\triangle)^{\frac{\beta}{2}}u(x,t)$ are reconstructed by the classical Levenberg-Marquardt method based on a discrete least squares functional. The simultaneous inversion of the fractional order $\alpha$ and the space-dependent diffusion coefficient has been considered in [2,10]. In this paper, we reconstruct three important parameters in the time-space fractional diffusion equation from only one boundary measurement.

    Let us consider the time and space-symmetric fractional diffusion equation in one-dimensional space

$$ {}_tD^{\alpha}u(x,t)=-\kappa(-\triangle)^{\frac{\beta}{2}}u(x,t),\quad 0\le t\le T,\ 0\le x\le L, \tag{1.1} $$

    subject to homogeneous Neumann boundary conditions

$$ u_x(0,t)=u_x(L,t)=0 \tag{1.2} $$

    and the initial condition

$$ u(x,0)=f(x), \tag{1.3} $$

where $u$ is a solute concentration and $\kappa>0$ represents the diffusion coefficient. ${}_tD^{\alpha}$ is the Caputo time-fractional derivative of order $\alpha$ ($0<\alpha<1$) with starting point at $t=0$, defined as follows [19]:

$$ {}_tD^{\alpha}u(x,t)=\frac{1}{\Gamma(1-\alpha)}\int_0^{t}\frac{\partial u(x,\eta)}{\partial\eta}\,\frac{d\eta}{(t-\eta)^{\alpha}}. $$
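For readers who want a quick numerical sanity check of this definition, the following short Python sketch (our own illustration, not part of the original algorithmic content) approximates the Caputo derivative by piecewise-linear interpolation of $u$ (the classical L1 quadrature) and compares it with the exact value ${}_tD^{\alpha}t^{2}=\frac{2}{\Gamma(3-\alpha)}t^{2-\alpha}$:

```python
import math
import numpy as np

def caputo_l1(u, t, alpha, n_steps=4000):
    """Approximate the Caputo derivative (order 0 < alpha < 1) of u at time t.

    Piecewise-linear interpolation of u gives the L1 quadrature of
    (1/Gamma(1-alpha)) * int_0^t u'(s) (t-s)^(-alpha) ds.
    """
    s = np.linspace(0.0, t, n_steps + 1)
    du = np.diff(u(s)) / np.diff(s)                      # slopes on each subinterval
    # int_{s_k}^{s_{k+1}} (t-s)^(-alpha) ds = [(t-s_k)^(1-a) - (t-s_{k+1})^(1-a)] / (1-a)
    w = (t - s[:-1])**(1 - alpha) - (t - s[1:])**(1 - alpha)
    return np.sum(du * w) / math.gamma(2 - alpha)

alpha, t = 0.5, 1.0
print(caputo_l1(lambda s: s**2, t, alpha))               # L1 approximation
print(2.0 * t**(2 - alpha) / math.gamma(3 - alpha))      # exact Caputo derivative of t^2
```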

The symmetric space-fractional derivative $(-\triangle)^{\frac{\beta}{2}}$ of order $\beta$ ($1<\beta\le2$) is defined in [3,6,7]. For readability, we reproduce the following definition of $(-\triangle)^{\frac{\beta}{2}}$, $1<\beta\le2$:

Definition 1. [29] Suppose the Laplace operator has a complete set of orthonormal eigenfunctions $\varphi_n$ corresponding to eigenvalues $\lambda_n^{2}$ on a bounded domain $D$, i.e., $(-\triangle)\varphi_n=\lambda_n^{2}\varphi_n$ on $D$, where $B(\varphi)=0$ on $\partial D$ is one of the standard three homogeneous boundary conditions. Let

$$ G_{\gamma}=\Big\{f=\sum_{n=1}^{\infty}c_n\varphi_n,\ c_n=(f,\varphi_n),\ \sum_{n=1}^{\infty}|c_n|^{2}|\lambda_n|^{\gamma}<\infty,\ \gamma=\max\{\beta,0\}\Big\}, $$

then for any $f\in G_{\gamma}$, $(-\triangle)^{\frac{\beta}{2}}$ is defined by

$$ (-\triangle)^{\frac{\beta}{2}}f=\sum_{n=1}^{\infty}c_n\big(\lambda_n^{2}\big)^{\frac{\beta}{2}}\varphi_n. \tag{1.4} $$
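As an illustration of this spectral definition (a sketch of ours, assuming the Neumann eigenpairs $\varphi_n=\cos(n\pi x/L)$, $\lambda_n^{2}=(n\pi/L)^{2}$ used below), applying $(-\triangle)^{\beta/2}$ to a truncated cosine expansion simply scales each coefficient by $(n\pi/L)^{\beta}$:

```python
import numpy as np

def frac_laplacian_neumann(c, beta, x, L):
    """Apply (-Laplace)^(beta/2) to f = sum_{n>=1} c[n-1] cos(n pi x / L),
    following the spectral definition (1.4) with Neumann eigenfunctions."""
    n = np.arange(1, len(c) + 1)
    modes = np.cos(np.outer(n, x) * np.pi / L)            # row n holds cos(n pi x / L)
    return ((n * np.pi / L) ** beta * c) @ modes

# sanity check: for f = cos(pi x / L) the result is (pi/L)^beta cos(pi x / L)
L, beta = 2.0, 1.2
x = np.linspace(0.0, L, 401)
c = np.zeros(16); c[0] = 1.0
err = frac_laplacian_neumann(c, beta, x, L) - (np.pi / L) ** beta * np.cos(np.pi * x / L)
print(np.max(np.abs(err)))                                # ~ 1e-16
```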

In the case $\alpha=1$, $\beta=2$, Eq (1.1) reduces to the classical diffusion equation. For $0<\alpha<1$, $\beta=2$, Eq (1.1) models subdiffusion due to particles having long-tailed resting times. For $\alpha=1$, $1<\beta<2$, Eq (1.1) corresponds to a Lévy process. Hence the solution of (1.1) is important for describing the competition between these two anomalous diffusion processes.

If $\alpha,\beta,\kappa,f(x)$ are given, the solution of the direct problem (1.1)–(1.3) can be obtained analytically by the method of separation of variables: setting $u(x,t)=X(x)T(t)$ and substituting into (1.1) yields

$$ {}_tD^{\alpha}X(x)T(t)+\kappa(-\triangle)^{\frac{\beta}{2}}X(x)T(t)=0. $$

Letting $\omega$ be the separation constant, we obtain two fractional ordinary linear differential equations for $X(x)$ and $T(t)$, namely

$$ (-\triangle)^{\frac{\beta}{2}}X(x)-\omega X(x)=0; \tag{1.5} $$
$$ {}_tD^{\alpha}T(t)+\kappa\omega T(t)=0, \tag{1.6} $$

respectively. Following the definition of the fractional Laplacian $(-\triangle)^{\frac{\beta}{2}}$ on a bounded domain, Eq (1.5) can be expressed as

$$ \sum_{n=1}^{\infty}c_n\big(\lambda_n^{2}\big)^{\frac{\beta}{2}}x_n-\omega\sum_{n=1}^{\infty}c_n x_n=0. \tag{1.7} $$

Hence, under the homogeneous Neumann conditions, the eigenvalues are $\omega_n=\lambda_n^{\beta}=\big(\frac{n\pi}{L}\big)^{\beta}$ ($1<\beta\le2$) for $n=0,1,2,\dots$, and the corresponding eigenfunctions are $x_n=\cos\big(\frac{n\pi x}{L}\big)$. Finally, the analytical solution of Eqs (1.1)–(1.3) is

$$ u(x,t)=\sum_{n=0}^{\infty}T_n(t)\cos\Big(\frac{n\pi x}{L}\Big)=\frac{1}{2}f_0+\sum_{n=1}^{\infty}E_{\alpha,1}\Big(-\kappa\Big(\frac{n\pi}{L}\Big)^{\beta}t^{\alpha}\Big)f_n\cos\Big(\frac{n\pi x}{L}\Big), \tag{1.8} $$

    where

$$ f_n=\frac{2}{L}\int_0^{L}f(x)\cos\Big(\frac{n\pi x}{L}\Big)dx,\quad n=0,1,2,\dots. $$

Here we have used the fact $E_{\alpha,1}(0)=1$, where $E_{\alpha,\beta}(z)$ is the Mittag-Leffler function defined by

$$ E_{\alpha,\beta}(z)=\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\alpha k+\beta)},\quad z\in\mathbb{C}. \tag{1.9} $$

By a method similar to [22], one can prove that (1.8) indeed gives the weak solution of Eqs (1.1)–(1.3).
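To make the series representation concrete, the following Python sketch (our own illustration; the truncation level and the plain power series for $E_{\alpha,1}$ are assumptions that are adequate only for the moderate arguments arising below, a robust evaluation would use the algorithm cited as [20]) evaluates the boundary data $g(t)=u(0,t)$ from a truncated version of (1.8):

```python
import math
import numpy as np

def mittag_leffler(alpha, beta, z, K=120):
    """Truncated power series (1.9); fine for moderate |z|, not for large |z|."""
    return sum(z**k / math.gamma(alpha*k + beta) for k in range(K))

def boundary_data(alpha, beta, kappa, t, f_coeffs, L=np.pi):
    """g(t) = u(0,t) from (1.8), truncated at N = len(f_coeffs) - 1 modes.

    f_coeffs[n] is the cosine coefficient f_n of the initial value f."""
    g = 0.5 * f_coeffs[0]
    for n in range(1, len(f_coeffs)):
        g += f_coeffs[n] * mittag_leffler(alpha, 1.0,
                                          -kappa * (n*np.pi/L)**beta * t**alpha)
    return g

# example call with some hypothetical coefficients f_0, f_1, f_2
print(boundary_data(0.5, 1.2, 0.1, 1.0, [1.0, 0.5, 0.25]))
```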

    Consider the following inverse problem:

Given $g(t):=u(0,t)$ (or $\tilde g(t):=u(L,t)$) with unknown $\alpha,\beta,\kappa$, we want to recover the orders $\alpha,\beta$ and the coefficient $\kappa$ from the data $g(t)$ (or $\tilde g(t)$).

Usually $g(t)$ is measured, and the only available data on $g(t)$ is its perturbation $g^{\delta}(t)$; we assume that there exists a known noise level $\delta$ such that

$$ \|g(\cdot)-g^{\delta}(\cdot)\|\le\delta, $$

where the norm denotes the $L^{2}$-norm.

In this paper, our main contributions are a uniqueness result for the determination of $\alpha,\beta,\kappa$ from the data $g(t)$ and two numerical methods for solving the inverse problem. Although the authors of [24] give a uniqueness result for a similar problem, their result holds only for $0<\alpha<1/2$ and sufficiently large $T$. Our result does not require this restriction and holds for $0<\alpha<1$ and finite $T$; this is achieved by imposing some additional smoothness assumptions on the initial data $f(x)$.

Throughout this paper, we sometimes denote the solution of the problem by $u(x,t)=u(\alpha,\beta,\kappa;x,t)$ to show its dependence on $\alpha,\beta,\kappa$.

    Now from (1.8), we have the relationship:

$$ g(t):=u(0,t)=\frac{1}{2}f_0+\sum_{n=1}^{\infty}E_{\alpha,1}\Big(-\kappa\Big(\frac{n\pi}{L}\Big)^{\beta}t^{\alpha}\Big)f_n. \tag{2.1} $$

    The uniqueness of the inverse problem is stated as follows:

Theorem 1. Suppose that $u_1(\alpha_1,\beta_1,\kappa_1;x,t)$ and $u_2(\alpha_2,\beta_2,\kappa_2;x,t)$ represent the solutions of the inverse problem with $\alpha=\alpha_1,\beta=\beta_1,\kappa=\kappa_1$ and $\alpha=\alpha_2,\beta=\beta_2,\kappa=\kappa_2$, respectively. We assume that the initial data satisfies

$$ f\in H^{4}(0,L),\quad f''(0)<0,\quad f'(0)=f'(L)=0\ (\mathrm{compatibility\ condition}), \tag{2.2} $$
$$ f_n>0,\quad n=0,1,2,\dots. \tag{2.3} $$

If $u_1(\alpha_1,\beta_1,\kappa_1;0,t)=u_2(\alpha_2,\beta_2,\kappa_2;0,t)$ for $0<t<T$, then

$$ \alpha_1=\alpha_2,\quad \beta_1=\beta_2,\quad \kappa_1=\kappa_2. $$

    Proof. From Eq (2.1), we have

$$ u_1(\alpha_1,\beta_1,\kappa_1;0,t)=\frac{1}{2}f_0+\sum_{n=1}^{\infty}E_{\alpha_1,1}\Big(-\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}t^{\alpha_1}\Big)f_n,\qquad u_2(\alpha_2,\beta_2,\kappa_2;0,t)=\frac{1}{2}f_0+\sum_{n=1}^{\infty}E_{\alpha_2,1}\Big(-\kappa_2\Big(\frac{n\pi}{L}\Big)^{\beta_2}t^{\alpha_2}\Big)f_n. $$

By the assumption $u_1(\alpha_1,\beta_1,\kappa_1;0,t)=u_2(\alpha_2,\beta_2,\kappa_2;0,t)$, it follows that

$$ \sum_{n=1}^{\infty}E_{\alpha_1,1}\Big(-\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}t^{\alpha_1}\Big)f_n=\sum_{n=1}^{\infty}E_{\alpha_2,1}\Big(-\kappa_2\Big(\frac{n\pi}{L}\Big)^{\beta_2}t^{\alpha_2}\Big)f_n, \tag{2.4} $$

for $0<t<T$.

As an initial step, we will prove that $\alpha_1=\alpha_2$.

By the definition of the Mittag-Leffler function, we have

$$ \begin{aligned} E_{\alpha_1,1}\Big(-\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}t^{\alpha_1}\Big) &= 1+\frac{1}{\Gamma(\alpha_1+1)}\Big(-\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}t^{\alpha_1}\Big)+\sum_{k=2}^{\infty}\frac{\big(-\kappa_1(\frac{n\pi}{L})^{\beta_1}t^{\alpha_1}\big)^{k}}{\Gamma(\alpha_1 k+1)}\\ &= 1-\frac{1}{\Gamma(\alpha_1+1)}\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}t^{\alpha_1}+t^{2\alpha_1}\Big(\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}\Big)^{2}\sum_{k=2}^{\infty}\frac{\big(-\kappa_1(\frac{n\pi}{L})^{\beta_1}t^{\alpha_1}\big)^{k-2}}{\Gamma(\alpha_1(k-2)+2\alpha_1+1)}\\ &= 1-\frac{1}{\Gamma(\alpha_1+1)}\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}t^{\alpha_1}+t^{2\alpha_1}\Big(\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}\Big)^{2}E_{\alpha_1,2\alpha_1+1}\Big(-\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}t^{\alpha_1}\Big). \end{aligned} \tag{2.5} $$

    And by the same argument, we obtain

$$ E_{\alpha_2,1}\Big(-\kappa_2\Big(\frac{n\pi}{L}\Big)^{\beta_2}t^{\alpha_2}\Big)=1-\frac{1}{\Gamma(\alpha_2+1)}\kappa_2\Big(\frac{n\pi}{L}\Big)^{\beta_2}t^{\alpha_2}+t^{2\alpha_2}\Big(\kappa_2\Big(\frac{n\pi}{L}\Big)^{\beta_2}\Big)^{2}E_{\alpha_2,2\alpha_2+1}\Big(-\kappa_2\Big(\frac{n\pi}{L}\Big)^{\beta_2}t^{\alpha_2}\Big). \tag{2.6} $$

From Eq (2.4), the following result

$$ \begin{aligned} &\sum_{n=1}^{\infty}f_n-\frac{t^{\alpha_1}}{\Gamma(\alpha_1+1)}\sum_{n=1}^{\infty}\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}f_n+t^{2\alpha_1}\sum_{n=1}^{\infty}\Big(\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}\Big)^{2}E_{\alpha_1,2\alpha_1+1}\Big(-\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}t^{\alpha_1}\Big)f_n\\ &=\sum_{n=1}^{\infty}f_n-\frac{t^{\alpha_2}}{\Gamma(\alpha_2+1)}\sum_{n=1}^{\infty}\kappa_2\Big(\frac{n\pi}{L}\Big)^{\beta_2}f_n+t^{2\alpha_2}\sum_{n=1}^{\infty}\Big(\kappa_2\Big(\frac{n\pi}{L}\Big)^{\beta_2}\Big)^{2}E_{\alpha_2,2\alpha_2+1}\Big(-\kappa_2\Big(\frac{n\pi}{L}\Big)^{\beta_2}t^{\alpha_2}\Big)f_n, \end{aligned} \tag{2.7} $$

    holds true. Now, we need to estimate the terms:

$$ t^{2\alpha_1}\sum_{n=1}^{\infty}\Big(\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}\Big)^{2}E_{\alpha_1,2\alpha_1+1}\Big(-\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}t^{\alpha_1}\Big)f_n $$

and

$$ t^{2\alpha_2}\sum_{n=1}^{\infty}\Big(\kappa_2\Big(\frac{n\pi}{L}\Big)^{\beta_2}\Big)^{2}E_{\alpha_2,2\alpha_2+1}\Big(-\kappa_2\Big(\frac{n\pi}{L}\Big)^{\beta_2}t^{\alpha_2}\Big)f_n. $$

    According to the inequality

$$ |E_{\alpha,\beta}(\eta)|\le\frac{C_0}{1+|\eta|},\quad \eta\in\mathbb{C},\ \frac{\pi}{2}+\delta_0\le|\arg\eta|\le\pi,\ 0<\alpha<2,\ \beta>0, \tag{2.8} $$

where $\delta_0,C_0$ are some positive constants, we get

$$ \begin{aligned} \Big|t^{2\alpha_1}\sum_{n=1}^{\infty}\Big(\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}\Big)^{2}E_{\alpha_1,2\alpha_1+1}\Big(-\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}t^{\alpha_1}\Big)f_n\Big| &\le t^{2\alpha_1}\sum_{n=1}^{\infty}\Big(\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}\Big)^{2}\Big|E_{\alpha_1,2\alpha_1+1}\Big(-\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}t^{\alpha_1}\Big)\Big|\,|f_n|\\ &\le C_0\,t^{2\alpha_1}\sum_{n=1}^{\infty}\Big(\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}\Big)^{2}\frac{1}{1+\kappa_1(\frac{n\pi}{L})^{\beta_1}t^{\alpha_1}}\,|f_n|. \end{aligned} \tag{2.9} $$

Furthermore, recalling that $f(x)\in H^{4}(0,L)$, we have

$$ \|f^{(4)}(x)\|_{L^{2}(0,L)}=\Big\|\sum_{n=0}^{\infty}f_n\Big(\frac{n\pi}{L}\Big)^{4}\cos\Big(\frac{n\pi x}{L}\Big)\Big\|_{L^{2}(0,L)}=C_1\Big(\sum_{n=0}^{\infty}|f_n|^{2}\Big(\frac{n\pi}{L}\Big)^{8}\Big)^{\frac{1}{2}}<\infty, \tag{2.10} $$

where $C_1$ is a constant independent of $n$.

For the term on the right-hand side of Eq (2.9),

$$ t^{2\alpha_1}\sum_{n=1}^{\infty}\Big(\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}\Big)^{2}\frac{1}{1+\kappa_1(\frac{n\pi}{L})^{\beta_1}t^{\alpha_1}}\,|f_n|=t^{\alpha_1+\varepsilon}\sum_{n=1}^{\infty}\frac{\big(\kappa_1(\frac{n\pi}{L})^{\beta_1}\big)^{2}}{\kappa_1(\frac{n\pi}{L})^{\beta_1-\epsilon_0}}\cdot\frac{\kappa_1(\frac{n\pi}{L})^{\beta_1-\epsilon_0}t^{\alpha_1-\varepsilon}}{1+\kappa_1(\frac{n\pi}{L})^{\beta_1}t^{\alpha_1}}\,|f_n| \tag{2.11} $$

holds when $0<\epsilon_0<3/2$ and $0<\varepsilon<\alpha_1$. Since

$$ \sup_{n\in\mathbb{N},\,t\ge0}\frac{\kappa_1(\frac{n\pi}{L})^{\beta_1-\epsilon_0}t^{\alpha_1-\varepsilon}}{1+\kappa_1(\frac{n\pi}{L})^{\beta_1}t^{\alpha_1}}:=C_2<\infty, $$

then

$$ t^{2\alpha_1}\sum_{n=1}^{\infty}\Big(\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}\Big)^{2}\frac{1}{1+\kappa_1(\frac{n\pi}{L})^{\beta_1}t^{\alpha_1}}\,|f_n|\le C_2\,t^{\alpha_1+\varepsilon}\sum_{n=1}^{\infty}\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1+\epsilon_0}|f_n|. \tag{2.12} $$

On the other hand, by the Cauchy-Schwarz inequality,

$$ \sum_{n=1}^{\infty}\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1+\epsilon_0}|f_n|=\sum_{n=1}^{\infty}\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1+\epsilon_0-4}\Big(\frac{n\pi}{L}\Big)^{4}|f_n|\le\Big(\sum_{n=1}^{\infty}\Big(\frac{n\pi}{L}\Big)^{8}|f_n|^{2}\Big)^{\frac{1}{2}}\Big(\sum_{n=1}^{\infty}\kappa_1^{2}\Big(\frac{n\pi}{L}\Big)^{2(\beta_1+\epsilon_0-4)}\Big)^{\frac{1}{2}}. \tag{2.13} $$

Noting that $\epsilon_0<3/2$ and $1<\beta_1<2$, we have $2(1+\epsilon_0-4)<2(\beta_1+\epsilon_0-4)<-1$. Combining this with (2.10), we can conclude that

$$ \sum_{n=1}^{\infty}\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1+\epsilon_0}|f_n|<\infty. \tag{2.14} $$

    Therefore,

$$ \Big|t^{2\alpha_1}\sum_{n=1}^{\infty}\Big(\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}\Big)^{2}E_{\alpha_1,2\alpha_1+1}\Big(-\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}t^{\alpha_1}\Big)f_n\Big|=O(t^{\alpha_1+\varepsilon}),\quad\text{as } t\to0, \tag{2.15} $$

with small $\varepsilon>0$. Similarly, we can verify that

$$ \Big|t^{2\alpha_2}\sum_{n=1}^{\infty}\Big(\kappa_2\Big(\frac{n\pi}{L}\Big)^{\beta_2}\Big)^{2}E_{\alpha_2,2\alpha_2+1}\Big(-\kappa_2\Big(\frac{n\pi}{L}\Big)^{\beta_2}t^{\alpha_2}\Big)f_n\Big|=O(t^{\alpha_2+\varepsilon}),\quad\text{as } t\to0, \tag{2.16} $$

with small $\varepsilon>0$.

Inserting the above two estimates into (2.7), we have

$$ \frac{t^{\alpha_1}}{\Gamma(\alpha_1+1)}\sum_{n=1}^{\infty}\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}f_n+O(t^{\alpha_1+\varepsilon})=\frac{t^{\alpha_2}}{\Gamma(\alpha_2+1)}\sum_{n=1}^{\infty}\kappa_2\Big(\frac{n\pi}{L}\Big)^{\beta_2}f_n+O(t^{\alpha_2+\varepsilon}). \tag{2.17} $$

If $\alpha_2<\alpha_1$, then, dividing by $t^{\alpha_2}$,

$$ \frac{t^{\alpha_1-\alpha_2}}{\Gamma(\alpha_1+1)}\sum_{n=1}^{\infty}\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}f_n+O(t^{\alpha_1-\alpha_2+\varepsilon})=\frac{1}{\Gamma(\alpha_2+1)}\sum_{n=1}^{\infty}\kappa_2\Big(\frac{n\pi}{L}\Big)^{\beta_2}f_n+O(t^{\varepsilon}) $$

holds as $t\to0$. Letting $t\to0$, this means

$$ \frac{1}{\Gamma(\alpha_2+1)}\sum_{n=1}^{\infty}\kappa_2\Big(\frac{n\pi}{L}\Big)^{\beta_2}f_n=0. \tag{2.18} $$

However, this is impossible: if $\sum_{n=1}^{\infty}\kappa_2\big(\frac{n\pi}{L}\big)^{\beta_2}f_n=0$, then, due to $f_n>0$ and $1<\beta_2\le2$,

$$ 0=\sum_{n=1}^{\infty}\kappa_2\Big(\frac{n\pi}{L}\Big)^{\beta_2}f_n=\sum_{n=1}^{\infty}\kappa_2\Big(\frac{n\pi}{L}\Big)^{2}\Big(\frac{n\pi}{L}\Big)^{\beta_2-2}f_n\ge\Big(\frac{\pi}{L}\Big)^{\beta_2-2}\sum_{n=1}^{\infty}\kappa_2\Big(\frac{n\pi}{L}\Big)^{2}f_n=-\kappa_2\Big(\frac{\pi}{L}\Big)^{\beta_2-2}f''(0)>0. \tag{2.19} $$

This is a contradiction. Hence $\alpha_2<\alpha_1$ does not hold. Similarly, $\alpha_2>\alpha_1$ does not hold either. As a result, we get $\alpha_1=\alpha_2$.

Next, we will prove that $\beta_1=\beta_2$ and $\kappa_1=\kappa_2$.

Now we have the following equation from (2.4):

$$ \sum_{n=1}^{\infty}E_{\alpha,1}\Big(-\kappa_1\Big(\frac{n\pi}{L}\Big)^{\beta_1}t^{\alpha}\Big)f_n=\sum_{n=1}^{\infty}E_{\alpha,1}\Big(-\kappa_2\Big(\frac{n\pi}{L}\Big)^{\beta_2}t^{\alpha}\Big)f_n, \tag{2.20} $$

for $\alpha_1=\alpha_2=\alpha$. By the analyticity of $E_{\alpha,1}(t)$, the above equation holds for all $t>0$. First, for $0<\alpha<1$, we have the Laplace transform of the Mittag-Leffler function:

$$ \int_0^{\infty}e^{-zt}E_{\alpha,1}\Big(-\kappa\Big(\frac{n\pi}{L}\Big)^{\beta}t^{\alpha}\Big)dt=\frac{z^{\alpha-1}}{z^{\alpha}+\kappa(\frac{n\pi}{L})^{\beta}},\quad z>0. \tag{2.21} $$
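This transform pair can be checked numerically; the sketch below (our own sanity check, with arbitrarily chosen values of $\alpha$, $z$ and of the combined coefficient $\kappa(\frac{n\pi}{L})^{\beta}$, and a simple truncated series for $E_{\alpha,1}$) compares a quadrature of the left-hand side with the closed form on the right:

```python
import math
import numpy as np

def ml(alpha, beta, z, K=160):
    # truncated Mittag-Leffler series; adequate for the moderate |z| used here
    return sum(z**k / math.gamma(alpha*k + beta) for k in range(K))

alpha, c, z = 0.6, 0.4, 1.3          # c stands for kappa*(n*pi/L)**beta
t = np.linspace(0.0, 40.0, 8001)     # exp(-z*t) makes the tail beyond t = 40 negligible
integrand = np.exp(-z*t) * np.array([ml(alpha, 1.0, -c*s**alpha) for s in t])
print(np.trapz(integrand, t))        # numerical Laplace transform (left-hand side)
print(z**(alpha-1) / (z**alpha + c)) # right-hand side of (2.21); should agree to several digits
```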

Taking the Laplace transform on both sides of Eq (2.20), we have

$$ \sum_{n=1}^{\infty}f_n\frac{z^{\alpha-1}}{z^{\alpha}+\kappa_1(\frac{n\pi}{L})^{\beta_1}}=\sum_{n=1}^{\infty}f_n\frac{z^{\alpha-1}}{z^{\alpha}+\kappa_2(\frac{n\pi}{L})^{\beta_2}},\quad z>0, \tag{2.22} $$

i.e.,

$$ \sum_{n=1}^{\infty}\frac{f_n}{z^{\alpha}+\kappa_1(\frac{n\pi}{L})^{\beta_1}}=\sum_{n=1}^{\infty}\frac{f_n}{z^{\alpha}+\kappa_2(\frac{n\pi}{L})^{\beta_2}},\quad z>0. \tag{2.23} $$

That is,

$$ \sum_{n=1}^{\infty}\frac{f_n}{\eta+\kappa_1(\frac{n\pi}{L})^{\beta_1}}=\sum_{n=1}^{\infty}\frac{f_n}{\eta+\kappa_2(\frac{n\pi}{L})^{\beta_2}},\quad \eta>0. \tag{2.24} $$

We can analytically continue both sides of (2.24) in $\eta$, so (2.24) holds for

$$ \eta\in\mathbb{C}\setminus\Big(\big\{-\kappa_1\big(\tfrac{n\pi}{L}\big)^{\beta_1}\big\}_{n\ge1}\cup\big\{-\kappa_2\big(\tfrac{n\pi}{L}\big)^{\beta_2}\big\}_{n\ge1}\Big). $$

Now we deduce that $\kappa_1\big(\frac{n\pi}{L}\big)^{\beta_1}=\kappa_2\big(\frac{n\pi}{L}\big)^{\beta_2}$ for every $n$. From (2.24), without loss of generality, we assume that $\kappa_1\big(\frac{\pi}{L}\big)^{\beta_1}<\kappa_2\big(\frac{\pi}{L}\big)^{\beta_2}$. Then we can take a suitable disc $R_1$ which includes the pole $-\kappa_1\big(\frac{n\pi}{L}\big)^{\beta_1}\big|_{n=1}$ but does not include $\big\{-\kappa_1\big(\frac{n\pi}{L}\big)^{\beta_1}\big\}_{n\ge2}\cup\big\{-\kappa_2\big(\frac{n\pi}{L}\big)^{\beta_2}\big\}_{n\ge1}$. According to (2.24) and the analyticity of both sides of (2.24), we have

$$ \sum_{n=1}^{\infty}\oint_{\partial R_1}\frac{f_n}{\eta+\kappa_1(\frac{n\pi}{L})^{\beta_1}}\,d\eta=\sum_{n=1}^{\infty}\oint_{\partial R_1}\frac{f_n}{\eta+\kappa_2(\frac{n\pi}{L})^{\beta_2}}\,d\eta, \tag{2.25} $$

and hence the Cauchy integral formula and the Cauchy integral theorem yield

$$ 2\pi i f_1=0. \tag{2.26} $$

However, $f_1\ne0$, i.e., $2\pi i f_1\ne0$. Therefore $\kappa_1\big(\frac{\pi}{L}\big)^{\beta_1}<\kappa_2\big(\frac{\pi}{L}\big)^{\beta_2}$ does not hold.

By the same argument, $\kappa_1\big(\frac{\pi}{L}\big)^{\beta_1}>\kappa_2\big(\frac{\pi}{L}\big)^{\beta_2}$ does not hold either, and repeating the argument for each pole yields $\kappa_1\big(\frac{n\pi}{L}\big)^{\beta_1}=\kappa_2\big(\frac{n\pi}{L}\big)^{\beta_2}$ for every $n$. Now we are in the position to prove $\kappa_1=\kappa_2$, $\beta_1=\beta_2$ from $\kappa_1\big(\frac{n\pi}{L}\big)^{\beta_1}=\kappa_2\big(\frac{n\pi}{L}\big)^{\beta_2}$. It is easy to see that if $\kappa_1\ne\kappa_2$ or $\beta_1\ne\beta_2$, then there must exist at least one $n$ such that $\kappa_1\big(\frac{n\pi}{L}\big)^{\beta_1}\ne\kappa_2\big(\frac{n\pi}{L}\big)^{\beta_2}$. By contraposition, we then have $\kappa_1=\kappa_2$, $\beta_1=\beta_2$.

In general, the conditions (2.2) and (2.3) are not easy to verify. Therefore we give weaker conditions for the uniqueness:

Theorem 2. Suppose that $u_1(\alpha_1,\beta_1,\kappa_1;x,t)$ and $u_2(\alpha_2,\beta_2,\kappa_2;x,t)$ represent the solutions of the inverse problem with $\alpha=\alpha_1,\beta=\beta_1,\kappa=\kappa_1$ and $\alpha=\alpha_2,\beta=\beta_2,\kappa=\kappa_2$, respectively. We assume that the initial data satisfies

$$ f\in H^{2}(0,L),\quad f'(0)=f'(L)=0\ (\mathrm{compatibility\ condition}), \tag{2.27} $$
$$ (-\triangle)^{\beta_k/2}f(0)\ne0,\quad k=1,2. \tag{2.28} $$

If $u_1(\alpha_1,\beta_1,\kappa_1;0,t)=u_2(\alpha_2,\beta_2,\kappa_2;0,t)$ for $0<t<T$, then

$$ \alpha_1=\alpha_2,\quad \beta_1=\beta_2,\quad \kappa_1=\kappa_2. $$

Proof. Without loss of generality, let $L=1$, $\varphi_n(x)=\sqrt{2}\cos n\pi x$ for $n\in\mathbb{N}$ and $\varphi_0(x)=1$. Then we consider

$$ \begin{cases} {}_tD^{\alpha}u(x,t)=-\kappa(-\triangle)^{\beta/2}u(x,t), & 0<x<1,\ 0<t<T,\\ u_x(0,t)=u_x(1,t)=0, & t>0,\\ u(x,0)=f(x), & 0<x<1. \end{cases} $$

    Then

$$ u(x,t)=\sum_{n=0}^{\infty}E_{\alpha,1}\big(-\kappa(n\pi)^{\beta}t^{\alpha}\big)(f,\varphi_n)\varphi_n(x). $$

Setting $f_n=\sqrt{2}\,(f,\varphi_n)$ for $n\in\mathbb{N}$ and $f_0=(f,1)$, we have

$$ u(0,t)=f_0+\sum_{n=1}^{\infty}E_{\alpha,1}\big(-\kappa(n\pi)^{\beta}t^{\alpha}\big)f_n. $$

    Assume

$$ u_{\alpha_1,\beta_1,\kappa_1}(0,t)=u_{\alpha_2,\beta_2,\kappa_2}(0,t),\quad t>0. $$

Then, due to Podlubny [19] (p. 34),

$$ \frac{t^{\alpha_1}}{\Gamma(\alpha_1+1)}\sum_{n=1}^{\infty}\kappa_1(n\pi)^{\beta_1}(f,\varphi_n)\varphi_n(0)+O(t^{2\alpha_1})=\frac{t^{\alpha_2}}{\Gamma(\alpha_2+1)}\sum_{n=1}^{\infty}\kappa_2(n\pi)^{\beta_2}(f,\varphi_n)\varphi_n(0)+O(t^{2\alpha_2}),\quad\text{as } t\to0. $$

Since $f\in D\big((-\triangle)^{\beta_k/2}\big)$ and $(-\triangle)^{\beta_k/2}$ is self-adjoint for $k=1,2$, we have

$$ \sum_{n=1}^{\infty}(n\pi)^{\beta_1}(f,\varphi_n)\varphi_n(0)=\sum_{n=0}^{\infty}(n\pi)^{\beta_1}(f,\varphi_n)\varphi_n(0)=\sum_{n=0}^{\infty}\big(f,(n\pi)^{\beta_1}\varphi_n\big)\varphi_n(0)=\sum_{n=0}^{\infty}\big(f,(-\triangle)^{\beta_1/2}\varphi_n\big)\varphi_n(0)=\sum_{n=0}^{\infty}\big((-\triangle)^{\beta_1/2}f,\varphi_n\big)\varphi_n(0)=(-\triangle)^{\beta_1/2}f(0). $$

Hence we have

$$ \frac{\kappa_1 t^{\alpha_1}}{\Gamma(\alpha_1+1)}(-\triangle)^{\beta_1/2}f(0)+O(t^{2\alpha_1})=\frac{\kappa_2 t^{\alpha_2}}{\Gamma(\alpha_2+1)}(-\triangle)^{\beta_2/2}f(0)+O(t^{2\alpha_2}). \tag{2.29} $$

Now we can prove that $\alpha_1=\alpha_2$.

Indeed, we assume that $\alpha_1<\alpha_2$. Dividing (2.29) by $t^{\alpha_1}$, we obtain

$$ \frac{\kappa_1}{\Gamma(\alpha_1+1)}(-\triangle)^{\beta_1/2}f(0)+O(t^{\alpha_1})=\frac{\kappa_2}{\Gamma(\alpha_2+1)}(-\triangle)^{\beta_2/2}f(0)\,t^{\alpha_2-\alpha_1}+O(t^{2\alpha_2-\alpha_1}). $$

Since $\alpha_1<\alpha_2$, letting $t\to0$ we obtain $\frac{\kappa_1}{\Gamma(\alpha_1+1)}(-\triangle)^{\beta_1/2}f(0)=0$, which contradicts (2.28). Hence $\alpha_1<\alpha_2$ cannot hold; by symmetry $\alpha_1>\alpha_2$ cannot hold either, so $\alpha_1=\alpha_2$.

The uniqueness proof for $\beta$ and $\kappa$ is the same as that in Theorem 1.

Remark on (2.28). Condition (2.28) can be guaranteed under a fairly generous condition on $f$ by the comparison principle for $(-\triangle)^{\beta/2}$.

In this section, we propose two numerical methods for solving this inverse problem based on the least squares functional. The first is a Tikhonov method in the function space. The second is the classical Levenberg-Marquardt optimization method in the discrete Euclidean space.

Denote $a=(\alpha,\beta,\kappa)\in\mathbb{R}^{3}$ and let $u(x,t;a):=u(\alpha,\beta,\kappa;x,t)$ be the unique solution of the forward problem. A feasible way to compute the unknown $a$ numerically is to solve the following minimization problem:

$$ \min_{a\in\mathbb{R}^{3}}J(a):=\min_{a\in\mathbb{R}^{3}}\frac{1}{2}\Big(\|u(a;0,t)-g^{\delta}(t)\|_{L^{2}(0,T)}^{2}+\lambda\|a\|_{\mathbb{R}^{3}}^{2}\Big). \tag{3.1} $$

The gradient $\nabla J(a)$ of the functional $J(a)$ is given by

$$ \nabla J(a)=\begin{bmatrix}\displaystyle\int_0^{T}\frac{\partial u(\alpha,\beta,\kappa;0,t)}{\partial\alpha}\big(u(\alpha,\beta,\kappa;0,t)-g^{\delta}(t)\big)dt+\lambda\alpha\\[2mm]\displaystyle\int_0^{T}\frac{\partial u(\alpha,\beta,\kappa;0,t)}{\partial\beta}\big(u(\alpha,\beta,\kappa;0,t)-g^{\delta}(t)\big)dt+\lambda\beta\\[2mm]\displaystyle\int_0^{T}\frac{\partial u(\alpha,\beta,\kappa;0,t)}{\partial\kappa}\big(u(\alpha,\beta,\kappa;0,t)-g^{\delta}(t)\big)dt+\lambda\kappa\end{bmatrix}. $$

If we set $\nabla J(a)=0$, we obtain the Euler equation for the minimizer. However, it is a nonlinear equation and cannot easily be solved directly. Here we turn to an approximate solution by an iterative method. Using the gradient flow method with an initial value $a^{0}$, we get

$$ \frac{da}{dt}=-\nabla J(a), \tag{3.2} $$

where $t$ is an artificial time. Using a simple scheme, i.e., the explicit Euler method, we arrive at the following iteration with a time step size $\tau$:

$$ a^{j+1}=a^{j}-\tau\nabla J(a^{j}), \tag{3.3} $$

    i.e.,

$$ \begin{aligned} \alpha^{j+1}&=\alpha^{j}-\tau_{\alpha}\Big(\int_0^{T}\frac{\partial u(\alpha,\beta,\kappa;0,t)}{\partial\alpha}\Big|_{\alpha=\alpha^{j},\beta=\beta^{j},\kappa=\kappa^{j}}\big(u(\alpha^{j},\beta^{j},\kappa^{j};0,t)-g^{\delta}(t)\big)dt+\lambda\alpha^{j}\Big);\\ \beta^{j+1}&=\beta^{j}-\tau_{\beta}\Big(\int_0^{T}\frac{\partial u(\alpha,\beta,\kappa;0,t)}{\partial\beta}\Big|_{\alpha=\alpha^{j},\beta=\beta^{j},\kappa=\kappa^{j}}\big(u(\alpha^{j},\beta^{j},\kappa^{j};0,t)-g^{\delta}(t)\big)dt+\lambda\beta^{j}\Big);\\ \kappa^{j+1}&=\kappa^{j}-\tau_{\kappa}\Big(\int_0^{T}\frac{\partial u(\alpha,\beta,\kappa;0,t)}{\partial\kappa}\Big|_{\alpha=\alpha^{j},\beta=\beta^{j},\kappa=\kappa^{j}}\big(u(\alpha^{j},\beta^{j},\kappa^{j};0,t)-g^{\delta}(t)\big)dt+\lambda\kappa^{j}\Big). \end{aligned} \tag{3.4} $$
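A compact Python sketch of this iteration is given below (our own illustration: it re-implements the truncated series solver sketched after Eq (1.9) as the forward map, replaces the sensitivities $u_\alpha,u_\beta,u_\kappa$ by one-sided finite differences with step $h$, approximates the time integral by the trapezoidal rule, and uses as defaults the step sizes and regularization parameter reported in the numerical experiments below):

```python
import math
import numpy as np

def ml(alpha, beta, z, K=120):
    # truncated Mittag-Leffler series (1.9); adequate for moderate |z|
    return sum(z**k / math.gamma(alpha*k + beta) for k in range(K))

def forward(a, t, f_coeffs, L=np.pi):
    # boundary data u(0, t) from the truncated series solution (1.8)
    alpha, beta, kappa = a
    return 0.5*f_coeffs[0] + sum(
        f_coeffs[n] * ml(alpha, 1.0, -kappa*(n*np.pi/L)**beta * t**alpha)
        for n in range(1, len(f_coeffs)))

def grad_J(a, t, g_delta, f_coeffs, lam, h=0.01):
    """Gradient of (3.1): residual-weighted sensitivities plus the penalty term,
    with du/d(alpha, beta, kappa) replaced by one-sided finite differences."""
    base = np.array([forward(a, s, f_coeffs) for s in t])
    r = base - g_delta
    grad = np.empty(3)
    for k in range(3):
        ak = a.copy(); ak[k] += h
        pert = np.array([forward(ak, s, f_coeffs) for s in t])
        grad[k] = np.trapz((pert - base) / h * r, t) + lam * a[k]
    return grad

def tikhonov_descent(a0, t, g_delta, f_coeffs, lam=1e-3,
                     tau=(0.07, 0.01, 0.02), tol=0.01, max_iter=200):
    """Explicit Euler discretization (3.3)/(3.4) of the gradient flow (3.2)."""
    a = np.array(a0, dtype=float)
    tau = np.asarray(tau, dtype=float)
    for _ in range(max_iter):
        a_new = a - tau * grad_J(a, t, g_delta, f_coeffs, lam)
        if np.linalg.norm(a_new - a) <= tol:
            return a_new
        a = a_new
    return a
```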

In most practical applications, the data are measured at discrete times. Assume the measured data are given by $g^{\delta}(t_i)$, $i=0,1,\dots,q$. Let us consider the minimization problem in the discrete case:

$$ \|u(a;0,t_i)-g^{\delta}(t_i)\|_{\mathbb{R}^{q}}^{2}, $$

where $u(a;0,t_i)$ is the data computed from the forward problem for a given $a$, which is used to fit the measured data. A standard method for solving this least squares problem is the Levenberg-Marquardt method with a damping parameter $\tilde\lambda$, which plays the same role as the regularization parameter $\lambda$ in the Tikhonov method. For readability, we give the details of this algorithm.

An updated sequence is given by

$$ a^{j+1}=a^{j}+\Delta a^{j},\quad j=1,2,\dots, \tag{3.5} $$

where $\Delta a^{j}$ is the update step of $a^{j}$ at iteration step $j$. We consider the minimization problem for $\Delta a^{j}$ at each iteration step $j$:

$$ F(\Delta a^{j}):=\|u(a^{j}+\Delta a^{j};0,t_i)-g^{\delta}(t_i)\|_{\mathbb{R}^{q}}^{2}. \tag{3.6} $$

Making the Taylor expansion of $u(a^{j}+\Delta a^{j};0,t_i)$ at $a^{j}$ and taking the linear approximation, we have

$$ u(a^{j}+\Delta a^{j};0,t_i)\approx u(a^{j};0,t_i)+\nabla_a^{\mathrm{tr}}u(a^{j};0,t_i)\,\Delta a^{j}. $$

Plugging this into (3.6), we get

$$ F(\Delta a^{j}):=\big\|\nabla_a^{\mathrm{tr}}u(a^{j};0,t_i)\,\Delta a^{j}-\big(g^{\delta}(t_i)-u(a^{j};0,t_i)\big)\big\|_{\mathbb{R}^{q}}^{2}. \tag{3.7} $$

However, this least squares problem inherits the ill-posedness of the original problem; therefore we consider the Tikhonov-regularized version:

$$ F(\Delta a^{j}):=\big\|\nabla_a^{\mathrm{tr}}u(a^{j};0,t_i)\,\Delta a^{j}-\big(g^{\delta}(t_i)-u(a^{j};0,t_i)\big)\big\|_{\mathbb{R}^{q}}^{2}+\tilde\lambda\|\Delta a^{j}\|_{\mathbb{R}^{3}}^{2}, \tag{3.8} $$

where $\nabla_a^{\mathrm{tr}}u(a^{j};0,t_i)$ is computed by the finite difference method, i.e., $\nabla_a^{\mathrm{tr}}u(a^{j};0,t_i)\,\Delta a^{j}\approx\sum_{k=1}^{3}\frac{u(a_k^{j}+h;0,t_i)-u(a_k^{j};0,t_i)}{h}\,\Delta a_k^{j}$. Now the minimization problem (3.8) is linear and can easily be solved for the update step $\Delta a^{j}$ with the regularization parameter $\tilde\lambda$.
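The following Python sketch (our own illustration; the forward solver is passed in as a callable, for example the series sketch after Eq (1.9), and the Jacobian is built column by column with the same one-sided finite differences) carries out the damped update (3.8) by solving its normal equations:

```python
import numpy as np

def lm_step(forward_vec, a, g_delta, lam_tilde=1e-3, h=0.01):
    """One Levenberg-Marquardt update (3.5)-(3.8).

    forward_vec(a) -> array of computed data u(a; 0, t_i) at the measurement times."""
    base = forward_vec(a)
    r = g_delta - base                                   # residual g^delta - u(a^j)
    J = np.empty((base.size, a.size))
    for k in range(a.size):
        ak = a.copy(); ak[k] += h
        J[:, k] = (forward_vec(ak) - base) / h           # finite-difference Jacobian column
    # minimizer of ||J da - r||^2 + lam_tilde ||da||^2 via its normal equations
    da = np.linalg.solve(J.T @ J + lam_tilde * np.eye(a.size), J.T @ r)
    return a + da

def levenberg_marquardt(forward_vec, a0, g_delta, lam_tilde=1e-3, tol=0.01, max_iter=100):
    """Iterate (3.5) until the update is smaller than the stopping tolerance."""
    a = np.array(a0, dtype=float)
    for _ in range(max_iter):
        a_new = lm_step(forward_vec, a, g_delta, lam_tilde)
        if np.linalg.norm(a_new - a) <= tol:
            return a_new
        a = a_new
    return a
```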

In this section, we consider a simple example to show the effectiveness of the two algorithms described above, i.e., the Tikhonov method and the Levenberg-Marquardt method. We want to determine the parameters $(\alpha,\beta,\kappa)$ in the following problem:

$$ {}_tD^{\alpha}u(x,t)=-\kappa(-\triangle)^{\frac{\beta}{2}}u(x,t),\quad 0\le t\le T,\ 0\le x\le\pi, \tag{3.9} $$
$$ u_x(0,t)=u_x(\pi,t)=0, \tag{3.10} $$
$$ u(x,0):=f(x)=x^{2}(3\pi/2-x). \tag{3.11} $$

    The exact solution is given by

$$ u(x,t)=\frac{\pi^{3}}{4}+\sum_{n=1}^{\infty}E_{\alpha,1}\big(-\kappa n^{\beta}t^{\alpha}\big)\Big[-\frac{12\big(1+(-1)^{n+1}\big)}{\pi n^{4}}\Big]\cos(nx). \tag{3.12} $$
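As a quick check of the coefficients appearing in (3.12) (our own verification sketch using simple trapezoidal quadrature), the cosine coefficients of $f(x)=x^{2}(3\pi/2-x)$ can be compared against the closed form:

```python
import numpy as np

# f_n = (2/pi) * int_0^pi x^2 (3*pi/2 - x) cos(n x) dx  versus the closed form in (3.12)
x = np.linspace(0.0, np.pi, 200001)
f = x**2 * (1.5*np.pi - x)
for n in range(1, 6):
    fn_quad = 2.0/np.pi * np.trapz(f * np.cos(n*x), x)
    fn_closed = -12.0 * (1.0 + (-1.0)**(n+1)) / (np.pi * n**4)
    print(n, fn_quad, fn_closed)                   # the two columns agree; even n give 0
print(np.trapz(f, x) / np.pi, np.pi**3 / 4)        # constant term f_0/2 = pi^3/4
```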

Now the input data $g(t):=u(0,t)$ is obtained, and the noisy data $g^{\delta}(t)$ is generated in the following way:

$$ g^{\delta}(t)=g(t)+\sigma(t), \tag{3.13} $$

where $\sigma(t)=\theta\,r(t)$, $r(t)$ is a random number in $[0,1]$, and $\theta=\max\{g(t)\}\cdot\eta\%$ is the noise level. In the numerical experiment, we fix the parameters $T=1$, $\eta=1$. The algorithm for computing the Mittag-Leffler function is given in [20]. First we plot the solution $u(x,t=1)$ of the direct problem for $\alpha=0.5$, $\beta=1.2$, $\kappa=0.10$, which is shown in Figure 1. The exact input data $g(t)$ is displayed in Figure 2.
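A minimal sketch of this perturbation step (ours; the fixed seed is only for reproducibility) is:

```python
import numpy as np

def add_noise(g, eta=1.0, rng=None):
    """Perturb exact data g(t_i) as in (3.13): sigma = theta * r with
    r ~ U[0, 1] and theta = max(g) * eta percent."""
    rng = np.random.default_rng(0) if rng is None else rng
    theta = np.max(g) * eta / 100.0
    return g + theta * rng.uniform(0.0, 1.0, size=np.shape(g))
```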

Figure 1. The solution $u(x,t)$ at time $t=1$ for the direct problem. The exact fractional orders and diffusion coefficient are $\alpha=0.5$, $\beta=1.2$, $\kappa=0.10$.

Figure 2. The input data for the reconstruction.

In the numerical test for the Tikhonov method, the parameters and their values used in the computation are listed below:

1. Summation index of $u(x,t)$ in (3.12): the series is truncated at $n=20$.

2. The measurement times are $t_i=i/q\in[0,1]$ ($i=0,\dots,q$) with $q=20$ in both methods ($q$ is the number of points for the trapezoidal rule used for the numerical integrals in (3.4)).

3. Step sizes $h_s$ ($s=\alpha,\beta,\kappa$) for computing the derivatives $u_\alpha$, $u_\beta$ and $u_\kappa$ in (3.4) by the finite difference method: $h_\alpha=h_\beta=h_\kappa=0.01$.

4. The initial guess: $(\alpha,\beta,\kappa)=(0.4,1.0,0.05)$.

5. The regularization parameter: $\lambda=0.001$.

6. The iterative step size: $\tau=(\tau_\alpha,\tau_\beta,\tau_\kappa)=(0.07,0.01,0.02)$.

7. Stopping criterion: $\|a^{j+1}-a^{j}\|\le0.01$.

Finally we obtain the approximate values $(\alpha,\beta,\kappa)=(0.5289,1.1836,0.0943)$. Keeping the same parameters and taking the damping parameter $\tilde\lambda=0.001$, we use the Levenberg-Marquardt method (implemented directly, without the Matlab optimization toolbox) to obtain the approximate values

$$ (\alpha,\beta,\kappa)=(0.4933,1.1968,0.0998). $$

These results show that the numerical methods are effective. Here we list more results obtained with the above parameters for the Levenberg-Marquardt method. First we fix the initial guesses $\alpha=0.1$, $\kappa=0.05$ and let the initial guess for $\beta$ range from 1 to 1.4. The numerical results are displayed in Table 1.

Table 1. Numerical results for different initial guesses of $\beta$ with fixed initial guesses $\alpha=0.1$, $\kappa=0.05$.

Initial $\beta$ | Approximation of $(\alpha,\beta,\kappa)$
1 | (0.4992; 1.2012; 0.1001)
1.1 | (0.5015; 1.2016; 0.1002)
1.2 | (0.49995; 1.2014; 0.1002)
1.3 | (0.5011; 1.19999; 0.09953)
1.4 | (0.4981; 1.202; 0.09999)


    The numerical results show that the methods are stable.

In this paper, we proved the uniqueness of determining three parameters in a time-space fractional diffusion equation from observation data on an accessible boundary. Based on our uniqueness result, a Tikhonov method and the Levenberg-Marquardt method were tested preliminarily. Further research on the stability of the proposed methods will be carried out in the future.

The authors are grateful to the anonymous referees for their insightful comments and suggestions, which greatly improved the original version of the manuscript. The authors would also like to thank Prof. Masahiro Yamamoto for his important comments on Theorem 2.

This work is partially supported by the Natural Science Foundation of China (No. 11661072) and the Natural Science Foundation of Northwest Normal University, China (No. NWNU-LKQN-17-5).

    There is no conflict of interest.


