Research article

Weighted pseudo almost periodic solutions of octonion-valued neural networks with mixed time-varying delays and leakage delays

  • Received: 22 February 2023 Revised: 27 March 2023 Accepted: 05 April 2023 Published: 21 April 2023
  • MSC : 34A34, 34C25, 34D23, 34K20

  • In this paper, we propose a class of octonion-valued neural networks with leakage delays and mixed delays. Since the multiplication of octonions is neither associative nor commutative, we establish the existence and global exponential stability of weighted pseudo almost periodic solutions for octonion-valued neural networks with leakage delays and mixed delays by means of the Banach fixed point theorem, a proof by contradiction, and the non-decomposition method. Finally, we give one example to illustrate the feasibility and effectiveness of the main results.

    Citation: Jin Gao, Lihua Dai. Weighted pseudo almost periodic solutions of octonion-valued neural networks with mixed time-varying delays and leakage delays[J]. AIMS Mathematics, 2023, 8(6): 14867-14893. doi: 10.3934/math.2023760




    Given an open bounded domain $\Omega \subset \mathbb{R}^d$ with a smooth boundary $\Gamma$, and a positive real number $T$, we consider the nonlinear hyperbolic partial differential equation with the strong damping term $\alpha\Delta^2 u_t$:

    $$ u_{tt} + \alpha\Delta^2 u_t + \beta\Delta^2 u = F(x,t,u), \qquad (x,t)\in\Omega\times(0,T), \tag{1.1} $$

    associated with the final value conditions

    $$ u(x,T) = \rho(x), \qquad u_t(x,T) = \xi(x), \qquad x\in\Omega, \tag{1.2} $$

    and the Dirichlet boundary condition

    $$ u(x,t) = 0, \qquad (x,t)\in\Gamma\times(0,T), \tag{1.3} $$

    where $\alpha,\beta$ are positive constants and the source $F(x,t,u)$ is a given function of the variable $u$.

    As we all know, the amplitude of a wave is related to the amount of energy it carries: a high-amplitude wave carries a large amount of energy, and vice versa. As a wave propagates through a medium, its energy decreases over time, so the wave amplitude also decreases (a damped wave). Damped wave equations are widely used in science and engineering, especially in physics, to describe how waves propagate. They apply to all kinds of waves, from water waves [8] to sound and vibrations [13,21], and even light and radio waves [10].

    Let us briefly describe some previous results related to Problem (1.1). In recent years, much attention has been paid to the properties and asymptotic behavior of solutions of Problem (1.1) subject to the initial conditions $u(x,0)=\rho(x)$, $u_t(x,0)=\xi(x)$ (see the pioneering works [1,2,5,9,15]). However, to the best of our knowledge, there is no result on the backward problem (1.1)–(1.3).

    In practice we usually do not have these final value functions; instead, they are suggested by the experience of the researcher. A more reliable way is to use their observed values. However, observations always come with random errors, which arise from the limitations of the measuring device (measurement error). It is therefore natural that observations are made in the presence of some noise. In this paper, we consider the case where these perturbations are an additive stochastic white noise:

    $$ \rho^{\epsilon}(x) = \rho(x) + \epsilon W(x), \qquad \xi^{\epsilon}(x) = \xi(x) + \epsilon W(x), \tag{1.4} $$

    where $\epsilon$ is the amplitude of the noise and $W(x)$ is a Gaussian white noise process. Suppose further that even the observations (1.4) cannot be observed exactly; they can only be observed in the discretized form

    $$ \langle\rho^{\epsilon},\varphi_p\rangle = \langle\rho,\varphi_p\rangle + \epsilon\langle W,\varphi_p\rangle, \qquad \langle\xi^{\epsilon},\varphi_p\rangle = \langle\xi,\varphi_p\rangle + \epsilon\langle W,\varphi_p\rangle, \qquad p = 1,\dots,N, \tag{1.5} $$

    where $\{\varphi_p\}$ is an orthonormal basis of the Hilbert space $H$; $\langle\cdot,\cdot\rangle$ denotes the inner product in $H$; the $W_p := \langle W,\varphi_p\rangle$ follow the standard normal distribution; and the $\langle\rho^{\epsilon},\varphi_p\rangle$ are independent random variables for the orthonormal functions $\varphi_p$. For more detail on the white noise model, see [3,11,12].
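
    As a concrete illustration (our own, not part of the original analysis), the following minimal sketch simulates the discrete observation model (1.5) on $\Omega=(0,\pi)$ with the sine basis $\varphi_p(x)=\sqrt{2/\pi}\sin(px)$ used later in Section 6; the test function $\rho$ and the noise level $\epsilon$ are arbitrary choices.

        import numpy as np

        rng = np.random.default_rng(0)

        # Domain (0, pi), orthonormal Dirichlet eigenfunctions phi_p(x) = sqrt(2/pi) sin(p x)
        x = np.linspace(0.0, np.pi, 2001)
        phi = lambda p: np.sqrt(2.0 / np.pi) * np.sin(p * x)

        # An arbitrary "unknown" final value function rho (illustration only)
        rho = np.exp(-2.0) * np.sin(x) + np.exp(-1.0) * np.sin(2.0 * x)

        eps = 1e-2          # noise amplitude
        n_obs = 50          # number of observed Fourier modes

        # Exact coefficients <rho, phi_p> by numerical quadrature,
        # perturbed by eps * W_p with W_p ~ N(0, 1), as in (1.5)
        rho_coeff = np.array([np.trapz(rho * phi(p), x) for p in range(1, n_obs + 1)])
        W = rng.standard_normal(n_obs)
        rho_obs = rho_coeff + eps * W

        print("first five exact coefficients :", np.round(rho_coeff[:5], 4))
        print("first five noisy observations :", np.round(rho_obs[:5], 4))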

    It is well known that Problem (1.1)–(1.4) is ill-posed in the sense of Hadamard (if a solution exists, it does not depend continuously on the final values), and regularization methods are required. The aim of this paper is to recover the unknown final value functions $\rho$, $\xi$ from the indirect and noisy discrete observations (1.5) and then to use them to construct a regularized solution by the Fourier truncation method. To the best of our knowledge, the present paper may be the first study of an ill-posed problem for hyperbolic equations with Gaussian white noise. We have borrowed ideas from the articles [14,17,18,20], but the detailed techniques are different.

    The organizational structure of this paper is as follows. Section 2 introduces some preliminary material. Section 3 uses Fourier series to obtain the mild solution and analyzes the ill-posedness of the problem. Section 4 presents an example of an ill-posed problem with random noise. In Section 5, we present the main results: we first propose a new regularized solution, and then give convergence estimates between the mild solution and the regularized solution under some a priori assumptions on the exact solution. To end this section, we discuss a rule for choosing the regularization parameter. Finally, Section 6 reports numerical implementations to support our theoretical results and to show the validity of the proposed reconstruction method.

    Throughout this paper, let us denote by $H := L^2(\Omega)$ the Hilbert space with inner product $\langle\cdot,\cdot\rangle$. Since $\Omega$ is a bounded open set, there exist a Hilbert orthonormal basis $\{\varphi_p\}_{p=1}^{\infty}$ of $H$ (with $\varphi_p\in H_0^1(\Omega)\cap C^{\infty}(\overline{\Omega})$) and a sequence of real numbers $\{\lambda_p\}_{p=1}^{\infty}$, $0<\lambda_1\le\lambda_2\le\cdots$, $\lim_{p\to\infty}\lambda_p=+\infty$, such that $-\Delta\varphi_p(x)=\lambda_p\varphi_p(x)$ for $x\in\Omega$ and $\varphi_p(x)=0$ for $x\in\partial\Omega$. We say that the $\lambda_p$ are the eigenvalues of $-\Delta$ and the $\varphi_p$ are the associated eigenfunctions. The Sobolev class of functions is defined as follows:

    $$ H^{\mu} = \Big\{ f\in H : \sum_{p=1}^{\infty}\lambda_p^{\mu}\langle f,\varphi_p\rangle^2 < \infty \Big\}. $$

    It is a Hilbert space endowed with the norm $\|f\|_{H^{\mu}}^2 = \sum_{p=1}^{\infty}\lambda_p^{\mu}\langle f,\varphi_p\rangle^2$. For $\sigma,\nu>0$, following [4,6], we introduce the special Gevrey classes of functions

    $$ G_{\sigma,\nu} = \Big\{ f\in H : \sum_{p=1}^{\infty} e^{\sigma\lambda_p^2}\lambda_p^{\nu}\langle f,\varphi_p\rangle^2 < +\infty \Big\}. $$

    We remark that $G_{\sigma,\nu}$ is also a Hilbert space, endowed with the norm $\|f\|_{G_{\sigma,\nu}}^2 = \sum_{p=1}^{\infty} e^{\sigma\lambda_p^2}\lambda_p^{\nu}\langle f,\varphi_p\rangle^2$.

    Definition 2.1 (Bochner space [22]). Given a probability measure space $(\widetilde{\Omega},\mathcal{M},\mu)$ and a Hilbert space $H$, the Bochner space $L^2(\widetilde{\Omega},H)\equiv L^2((\widetilde{\Omega},\mathcal{M},\mu);H)$ is defined as the space of functions $u:\widetilde{\Omega}\to H$ for which the following norm is finite:

    $$ \|u\|_{L^2(\widetilde{\Omega},H)} := \Big(\int_{\widetilde{\Omega}}\|u(\omega)\|_H^2\,d\mu(\omega)\Big)^{1/2} = \big(\mathbb{E}\|u\|_H^2\big)^{1/2} < +\infty. \tag{2.1} $$

    Definition 2.2 (Reconstruction of the final value functions). Given $\rho,\xi\in H^{\mu}$ ($\mu>0$) with sequences of $n$ (the sample size) discrete observations $\langle\rho^{\epsilon},\varphi_p\rangle$ and $\langle\xi^{\epsilon},\varphi_p\rangle$, $p=1,\dots,n$, nonparametric estimates of $\rho$ and $\xi$ are given by

    $$ \widetilde{\rho}^{\,\epsilon}_n(x) = \sum_{p=1}^{n}\langle\rho^{\epsilon},\varphi_p\rangle\varphi_p(x), \qquad \widetilde{\xi}^{\,\epsilon}_n(x) = \sum_{p=1}^{n}\langle\xi^{\epsilon},\varphi_p\rangle\varphi_p(x). \tag{2.2} $$

    Lemma 2.1. Given $\rho,\xi\in H^{\mu}$ ($\mu>0$), the estimation errors satisfy

    $$ \mathbb{E}\|\widetilde{\rho}^{\,\epsilon}_n-\rho\|_H^2 \le \epsilon^2 n + \frac{1}{\lambda_n^{\mu}}\|\rho\|_{H^{\mu}}^2, \qquad \mathbb{E}\|\widetilde{\xi}^{\,\epsilon}_n-\xi\|_H^2 \le \epsilon^2 n + \frac{1}{\lambda_n^{\mu}}\|\xi\|_{H^{\mu}}^2. \tag{2.3} $$

    Here $n(\epsilon):=n$ depends on $\epsilon$ and satisfies $\lim_{\epsilon\to 0^+} n(\epsilon)=+\infty$.

    Proof. Our proof starts with the observation that

    $$ \mathbb{E}\|\widetilde{\rho}^{\,\epsilon}_n-\rho\|_H^2 = \mathbb{E}\Big(\sum_{p=1}^{n}\langle\rho^{\epsilon}-\rho,\varphi_p\rangle^2\Big) + \sum_{p=n+1}^{\infty}\langle\rho,\varphi_p\rangle^2 = \epsilon^2\,\mathbb{E}\Big(\sum_{p=1}^{n}W_p^2\Big) + \sum_{p=n+1}^{\infty}\frac{\lambda_p^{\mu}}{\lambda_p^{\mu}}\langle\rho,\varphi_p\rangle^2 \le \epsilon^2\,\mathbb{E}\Big(\sum_{p=1}^{n}W_p^2\Big) + \frac{1}{\lambda_n^{\mu}}\sum_{p=n+1}^{\infty}\lambda_p^{\mu}\langle\rho,\varphi_p\rangle^2. $$

    The assumption $W_p=\langle W,\varphi_p\rangle\overset{iid}{\sim}N(0,1)$ implies that $\mathbb{E}W_p^2=1$, which gives the first estimate. The same argument yields the second one.
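
    A minimal numerical check of Lemma 2.1 (our own illustration, with an arbitrary test function, smoothness index and noise level): build the truncated estimator $\widetilde{\rho}^{\,\epsilon}_n$ of Definition 2.2 from noisy coefficients and compare the Monte Carlo mean-squared error with the bound $\epsilon^2 n + \lambda_n^{-\mu}\|\rho\|_{H^{\mu}}^2$.

        import numpy as np

        rng = np.random.default_rng(1)

        x = np.linspace(0.0, np.pi, 4001)
        phi = lambda p: np.sqrt(2.0 / np.pi) * np.sin(p * x)
        lam = lambda p: float(p) ** 2                 # Dirichlet eigenvalues on (0, pi)

        rho = np.exp(-2.0) * np.sin(x) + np.exp(-1.0) * np.sin(2.0 * x)   # test function
        P, n, eps, mu = 200, 30, 1e-2, 2.0

        coeff = np.array([np.trapz(rho * phi(p), x) for p in range(1, P + 1)])

        # Monte Carlo estimate of E || rho_tilde_n - rho ||_H^2
        trials, mse = 200, 0.0
        for _ in range(trials):
            noisy = coeff[:n] + eps * rng.standard_normal(n)
            # squared error = sum over observed modes of the noise + squared tail
            mse += np.sum((noisy - coeff[:n]) ** 2) + np.sum(coeff[n:] ** 2)
        mse /= trials

        # Right-hand side of (2.3): eps^2 n + lambda_n^{-mu} ||rho||_{H^mu}^2
        norm_Hmu_sq = sum(lam(p) ** mu * coeff[p - 1] ** 2 for p in range(1, P + 1))
        bound = eps ** 2 * n + norm_Hmu_sq / lam(n) ** mu

        print("empirical error :", mse)
        print("bound of (2.3)  :", bound)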

    Taking the inner product of both sides of (1.1) and (1.2) with $\varphi_p$, and setting $u_p(t)=\langle u(\cdot,t),\varphi_p\rangle$, $\rho_p=\langle\rho,\varphi_p\rangle$, $\xi_p=\langle\xi,\varphi_p\rangle$ and $F_p(u)(t)=\langle F(\cdot,t,u(\cdot,t)),\varphi_p\rangle$, we obtain

    $$ \begin{cases} u_p''(t) + \alpha\lambda_p^2 u_p'(t) + \beta\lambda_p^2 u_p(t) = F_p(u)(t), \\ u_p(T) = \rho_p, \qquad u_p'(T) = \xi_p. \end{cases} \tag{3.1} $$

    In this work we assume that $\Delta_p := \alpha^2\lambda_p^4 - 4\beta\lambda_p^2 > 0$; then the quadratic equation $k^2 - \alpha\lambda_p^2 k + \beta\lambda_p^2 = 0$ has two distinct roots $k_p^- = \frac{\alpha\lambda_p^2 - \sqrt{\Delta_p}}{2}$, $k_p^+ = \frac{\alpha\lambda_p^2 + \sqrt{\Delta_p}}{2}$. Multiplying both sides of the first equation of System (3.1) by $\phi_p(\tau) = \frac{e^{(\tau-t)k_p^+} - e^{(\tau-t)k_p^-}}{\sqrt{\Delta_p}}$ and integrating from $t$ to $T$, we get

    $$ \int_t^T \phi_p(\tau)u_p''(\tau)\,d\tau + \alpha\lambda_p^2\int_t^T \phi_p(\tau)u_p'(\tau)\,d\tau + \beta\lambda_p^2\int_t^T \phi_p(\tau)u_p(\tau)\,d\tau = \int_t^T \phi_p(\tau)F_p(u)(\tau)\,d\tau. \tag{3.2} $$

    The left-hand side of (3.2) becomes

    $$ \Big[\phi_p(\tau)u_p'(\tau) - \phi_p'(\tau)u_p(\tau) + \alpha\lambda_p^2\phi_p(\tau)u_p(\tau)\Big]_t^T + \int_t^T\Big[\phi_p''(\tau) - \alpha\lambda_p^2\phi_p'(\tau) + \beta\lambda_p^2\phi_p(\tau)\Big]u_p(\tau)\,d\tau. $$

    Since $k_p^-$, $k_p^+$ satisfy the equation $k^2 - \alpha\lambda_p^2 k + \beta\lambda_p^2 = 0$, we have $\phi_p''(\tau) - \alpha\lambda_p^2\phi_p'(\tau) + \beta\lambda_p^2\phi_p(\tau) = 0$. Hence, (3.2) becomes

    $$ \Big[\phi_p(\tau)u_p'(\tau) - \phi_p'(\tau)u_p(\tau) + \alpha\lambda_p^2\phi_p(\tau)u_p(\tau)\Big]_t^T = \int_t^T \phi_p(\tau)F_p(u)(\tau)\,d\tau. \tag{3.3} $$

    It is worth noticing that $\phi_p(t)=0$, $\phi_p'(t)=1$ and $-\phi_p'(T)+\alpha\lambda_p^2\phi_p(T) = \frac{k_p^- e^{(T-t)k_p^+} - k_p^+ e^{(T-t)k_p^-}}{\sqrt{\Delta_p}}$. Therefore, (3.3) now becomes

    $$ u_p(t) = \frac{k_p^+ e^{(T-t)k_p^-} - k_p^- e^{(T-t)k_p^+}}{\sqrt{\Delta_p}}\,\rho_p - \frac{e^{(T-t)k_p^+} - e^{(T-t)k_p^-}}{\sqrt{\Delta_p}}\,\xi_p + \int_t^T \frac{e^{(\tau-t)k_p^+} - e^{(\tau-t)k_p^-}}{\sqrt{\Delta_p}}\,F_p(u)(\tau)\,d\tau. $$

    Lemma 3.1. Let $\rho,\xi\in H$. Suppose that Problem (1.1)–(1.3) has a solution $u\in C([0,T],H)$. Then the mild solution is represented in terms of the Fourier series as follows:

    $$ u(x,t) = R(T-t)\rho(x) - S(T-t)\xi(x) + \int_t^T S(\tau-t)F(x,\tau,u)\,d\tau, \tag{3.4} $$

    where the operators $R(t)$ and $S(t)$ are defined by

    $$ R(t)f = \sum_{p=1}^{\infty}\frac{k_p^+ e^{t k_p^-} - k_p^- e^{t k_p^+}}{\sqrt{\Delta_p}}\langle f,\varphi_p\rangle\varphi_p(x), \qquad S(t)f = \sum_{p=1}^{\infty}\frac{e^{t k_p^+} - e^{t k_p^-}}{\sqrt{\Delta_p}}\langle f,\varphi_p\rangle\varphi_p(x). \tag{3.5} $$
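
    The sketch below (our own illustration, under the standing assumption $\Delta_p>0$ and with $\lambda_p=p^2$ as in the numerical example of Section 6) computes the roots $k_p^{\pm}$ and evaluates the kernels of $R(t)$ and $S(t)$ in (3.5) mode by mode; applying the operators then amounts to multiplying the Fourier coefficients of $f$ by these kernels.

        import numpy as np

        def roots(lam_p, alpha, beta):
            # k_p^- and k_p^+, the roots of k^2 - alpha*lam^2*k + beta*lam^2 = 0
            disc = alpha ** 2 * lam_p ** 4 - 4.0 * beta * lam_p ** 2   # Delta_p, assumed > 0
            s = np.sqrt(disc)
            return (alpha * lam_p ** 2 - s) / 2.0, (alpha * lam_p ** 2 + s) / 2.0

        def R_kernel(t, lam_p, alpha, beta):
            km, kp = roots(lam_p, alpha, beta)
            return (kp * np.exp(t * km) - km * np.exp(t * kp)) / (kp - km)

        def S_kernel(t, lam_p, alpha, beta):
            km, kp = roots(lam_p, alpha, beta)
            return (np.exp(t * kp) - np.exp(t * km)) / (kp - km)

        # Example: the kernels grow like exp(t*k_p^+), the source of the ill-posedness
        alpha, beta, t = 0.3, 0.01, 1.0
        for p in (1, 2, 3, 4):
            lam_p = p ** 2                      # eigenvalues on Omega = (0, pi)
            print(p, R_kernel(t, lam_p, alpha, beta), S_kernel(t, lam_p, alpha, beta))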

    In this section, we present an example showing that Problem (1.1)–(1.3) with random noise (1.4) is ill-posed in the sense of Hadamard (the solution does not depend continuously on the final data). We consider the following particular case:

    $$ \begin{cases} \widetilde{u}^n_{tt} + \alpha\Delta^2\widetilde{u}^n_t + \beta\Delta^2\widetilde{u}^n = F(\widetilde{u}^n), & (x,t)\in\Omega\times(0,T), \\ \widetilde{u}^n(x,T) = 0, & x\in\Omega, \\ \widetilde{u}^n_t(x,T) = \widetilde{\xi}^{\,\epsilon}_n(x), & x\in\Omega, \\ \widetilde{u}^n(x,t) = 0, & (x,t)\in\Gamma\times(0,T), \end{cases} \tag{4.1} $$

    where $F(\widetilde{u}^n)(x,t) = \sum_{p=1}^{\infty}\frac{e^{-\alpha\lambda_p^2 T}}{2T^2}\langle\widetilde{u}^n(\cdot,t),\varphi_p\rangle\varphi_p(x)$. For simplicity of computation, we assume that $\Omega=(0,\pi)$; it immediately follows that $\lambda_p=p^2$. We assume further that the function $\xi(x)=0$ (unknown) has observations $\langle\xi^{\epsilon},\varphi_p\rangle = \epsilon\langle W,\varphi_p\rangle$, $p=1,\dots,n$. Then the statistical estimate of $\xi(x)$ takes the form

    $$ \widetilde{\xi}^{\,\epsilon}_n(x) = \sum_{p=1}^{n}\epsilon\langle W,\varphi_p\rangle\varphi_p(x). \tag{4.2} $$

    Using Lemma 3.1, System (4.1) has the mild solution

    $$ \widetilde{u}^n(x,t) = -S(T-t)\widetilde{\xi}^{\,\epsilon}_n + \int_t^T S(\tau-t)F(\widetilde{u}^n)(\tau)\,d\tau. \tag{4.3} $$

    We first show that this nonlinear integral equation has a unique solution $\widetilde{u}^n\in L^{\infty}([0,T];L^2(\widetilde{\Omega},H))$. Indeed, let us denote

    $$ \Phi(u)(x,t) = -S(T-t)\widetilde{\xi}^{\,\epsilon}_n + \int_t^T S(\tau-t)F(u)(\tau)\,d\tau. \tag{4.4} $$

    Let $u_1,u_2\in L^{\infty}([0,T];L^2(\widetilde{\Omega},H))$. Using Hölder's inequality and Parseval's identity, we obtain

    $$ \mathbb{E}\|\Phi(u_1)(\cdot,t)-\Phi(u_2)(\cdot,t)\|_H^2 = \mathbb{E}\Big\|\int_t^T S(\tau-t)\big(F(u_1)(\cdot,\tau)-F(u_2)(\cdot,\tau)\big)\,d\tau\Big\|_H^2 \le T\,\mathbb{E}\int_t^T\sum_{p=1}^{\infty}\big(\Pi_p(\tau)\,\Pi_p^F(\tau)\big)^2\,d\tau, $$

    where $\Pi_p(\tau) := \frac{e^{(\tau-t)k_p^+}-e^{(\tau-t)k_p^-}}{\sqrt{\Delta_p}}$ and $\Pi_p^F(\tau) := \langle F(u_1)(\cdot,\tau)-F(u_2)(\cdot,\tau),\varphi_p\rangle$.

    Since $|e^{-(\tau-t)k_p^-}-e^{-(\tau-t)k_p^+}| \le (\tau-t)|k_p^+-k_p^-| \le T\sqrt{\Delta_p}$ and $(\tau-t)(k_p^++k_p^-) \le T\alpha\lambda_p^2$, we have

    $$ |\Pi_p(\tau)| = e^{(\tau-t)(k_p^++k_p^-)}\,\frac{\big|e^{-(\tau-t)k_p^-}-e^{-(\tau-t)k_p^+}\big|}{\sqrt{\Delta_p}} \le T e^{\alpha\lambda_p^2 T}. \tag{4.5} $$

    From the definition of the function $F$ above, it follows that $\Pi_p^F(\tau) = \frac{e^{-\alpha\lambda_p^2 T}}{2T^2}\langle u_1(\cdot,\tau)-u_2(\cdot,\tau),\varphi_p\rangle$. Thus

    $$ \mathbb{E}\|\Phi(u_1)(\cdot,t)-\Phi(u_2)(\cdot,t)\|_H^2 \le \frac{1}{4T}\,\mathbb{E}\int_t^T\sum_{p=1}^{\infty}\langle u_1(\cdot,\tau)-u_2(\cdot,\tau),\varphi_p\rangle^2\,d\tau \le \frac{1}{4}\|u_1-u_2\|_{L^{\infty}([0,T];L^2(\widetilde{\Omega},H))}^2. $$

    Hence, $\|\Phi(u_1)-\Phi(u_2)\|_{L^{\infty}([0,T];L^2(\widetilde{\Omega},H))}^2 \le \frac{1}{4}\|u_1-u_2\|_{L^{\infty}([0,T];L^2(\widetilde{\Omega},H))}^2$, so $\Phi$ is a contraction. By the Banach fixed point theorem, the equation $\Phi(u)=u$ has a unique solution $u\in L^{\infty}([0,T];L^2(\widetilde{\Omega},H))$.
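
    To illustrate the fixed-point argument, the sketch below (our own construction: a uniform time grid, trapezoidal quadrature for the time integral, and only three Fourier modes to keep the numbers readable) iterates a discrete analogue of $\Phi$ in coefficient space; after the first step the successive sup-norm updates shrink by roughly a factor of two or better, consistent with $\Phi$ being a contraction.

        import numpy as np

        # Discrete Picard iteration for a truncated analogue of (4.4), in Fourier-
        # coefficient space: u_p(t), p = 1..modes, on a uniform time grid.
        alpha, beta, T = 0.3, 0.01, 1.0
        eps, modes, Nt = 1e-3, 3, 200
        t = np.linspace(0.0, T, Nt + 1)
        rng = np.random.default_rng(2)

        lam = np.array([(p + 1) ** 2 for p in range(modes)], dtype=float)   # lambda_p = p^2
        disc = np.sqrt(alpha ** 2 * lam ** 4 - 4.0 * beta * lam ** 2)       # sqrt(Delta_p)
        km, kp = (alpha * lam ** 2 - disc) / 2.0, (alpha * lam ** 2 + disc) / 2.0
        c = np.exp(-alpha * lam ** 2 * T) / (2.0 * T ** 2)                  # coefficients of F
        xi = eps * rng.standard_normal(modes)                               # noisy data xi_p

        def s_kernel(p, s):
            # kernel of S for mode p at time lag s >= 0
            return (np.exp(s * kp[p]) - np.exp(s * km[p])) / (kp[p] - km[p])

        def Phi(u):
            # u has shape (modes, Nt+1); trapezoidal rule for the time integral
            out = np.empty_like(u)
            for p in range(modes):
                for i in range(Nt + 1):
                    tau = t[i:]
                    integral = np.trapz(s_kernel(p, tau - t[i]) * c[p] * u[p, i:], tau)
                    out[p, i] = -s_kernel(p, T - t[i]) * xi[p] + integral
            return out

        u = np.zeros((modes, Nt + 1))
        for k in range(8):
            new = Phi(u)
            print("iteration", k + 1, "sup-norm update:", np.max(np.abs(new - u)))
            u = new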

    We then point out that the solution of System (4.1) does not depend continuously on the final data. We start with

    $$ \mathbb{E}\|\widetilde{u}^n(\cdot,t)\|_H^2 \ge \mathbb{E}\|S(T-t)\widetilde{\xi}^{\,\epsilon}_n\|_H^2 - \frac{1}{2}\,\mathbb{E}\Big\|\int_t^T S(\tau-t)F(\widetilde{u}^n)(\tau)\,d\tau\Big\|_H^2. \tag{4.6} $$

    It is easy to verify that

    $$ \mathbb{E}\Big\|\int_t^T S(\tau-t)F(\widetilde{u}^n)(\tau)\,d\tau\Big\|_H^2 \le \frac{1}{4}\,\mathbb{E}\|\widetilde{u}^n(\cdot,t)\|_H^2. $$

    This leads to

    $$ \mathbb{E}\|\widetilde{u}^n(\cdot,t)\|_H^2 \ge \frac{8}{9}\,\mathbb{E}\|S(T-t)\widetilde{\xi}^{\,\epsilon}_n\|_H^2. \tag{4.7} $$

    It is worth recalling that $\mathbb{E}\langle\widetilde{\xi}^{\,\epsilon}_n,\varphi_p\rangle^2 = \epsilon^2$, so

    $$ \mathbb{E}\|S(T-t)\widetilde{\xi}^{\,\epsilon}_n\|_H^2 = \sum_{p=1}^{n}\Big[\frac{e^{(T-t)k_p^+}-e^{(T-t)k_p^-}}{k_p^+-k_p^-}\Big]^2\mathbb{E}\langle\widetilde{\xi}^{\,\epsilon}_n,\varphi_p\rangle^2 \ge \Big[\frac{e^{(T-t)k_n^+}-e^{(T-t)k_n^-}}{k_n^+-k_n^-}\Big]^2\epsilon^2. \tag{4.8} $$

    We note that $k_n^+-k_n^- = \sqrt{\Delta_n} = \sqrt{\alpha^2\lambda_n^4-4\beta\lambda_n^2} > \sqrt{\alpha^2\lambda_1^4-4\beta\lambda_1^2}$; then we have

    $$ \Big[\frac{e^{(T-t)k_n^+}-e^{(T-t)k_n^-}}{k_n^+-k_n^-}\Big]^2 = \frac{e^{2(T-t)k_n^+}\big[1-e^{-(T-t)(k_n^+-k_n^-)}\big]^2}{(k_n^+-k_n^-)^2} \ge \frac{e^{2(T-t)k_n^+}\big[1-e^{-(T-t)\sqrt{\alpha^2\lambda_1^4-4\beta\lambda_1^2}}\big]^2}{\alpha^2\lambda_n^4-4\beta\lambda_n^2}. $$

    The function $h(t) = e^{(T-t)k_n^+}\big[1-e^{-(T-t)\sqrt{\alpha^2\lambda_1^4-4\beta\lambda_1^2}}\big]$ is decreasing with respect to the variable $t\in[0,T]$, so $\sup_{0\le t\le T}h(t)=h(0)$. This leads to

    $$ \sup_{0\le t\le T}\frac{h^2(t)}{\alpha^2\lambda_n^4-4\beta\lambda_n^2} = \frac{e^{2Tk_n^+}\big[1-e^{-T\sqrt{\alpha^2\lambda_1^4-4\beta\lambda_1^2}}\big]^2}{\alpha^2\lambda_n^4-4\beta\lambda_n^2} \ge \frac{e^{2T\lambda_n}\big[1-e^{-T\sqrt{\alpha^2\lambda_1^4-4\beta\lambda_1^2}}\big]^2}{\alpha^2\lambda_n^4-4\beta\lambda_n^2}. \tag{4.9} $$

    Combining (4.7)–(4.9) yields

    $$ \sup_{0\le t\le T}\mathbb{E}\|\widetilde{u}^n(\cdot,t)\|_H^2 \ge \frac{8}{9}\,\frac{e^{2T\lambda_n}\big[1-e^{-T\sqrt{\alpha^2\lambda_1^4-4\beta\lambda_1^2}}\big]^2}{\alpha^2\lambda_n^4-4\beta\lambda_n^2}\,\epsilon^2 \ge \frac{8}{9}\,\frac{e^{2Tn^2}\big[1-e^{-T\sqrt{\alpha^2-4\beta}}\big]^2}{\alpha^2 n^8-4\beta n^4}\,\epsilon^2. $$

    Let us choose $n(\epsilon) := n = \sqrt{\frac{1}{2T}\ln\frac{1}{\epsilon^3}}$. When $\epsilon\to 0^+$, we have $\mathbb{E}\|\widetilde{\xi}^{\,\epsilon}_n\|_H^2 = \epsilon^2 n(\epsilon)\to 0$. However,

    $$ \mathbb{E}\|\widetilde{u}^n\|_{C([0,T];L^2(\Omega))}^2 \ge \frac{8}{9}\,\frac{1}{\epsilon}\,\frac{\big[1-e^{-T\sqrt{\alpha^2-4\beta}}\big]^2}{\alpha^2\big[\frac{1}{2T}\ln\frac{1}{\epsilon^3}\big]^4 - 4\beta\big[\frac{1}{2T}\ln\frac{1}{\epsilon^3}\big]^2} \to +\infty. $$

    Thus, we conclude that Problem (1.1)–(1.3) with random noise (1.4) is ill-posed in the sense of Hadamard.
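
    The blow-up can also be seen numerically. The sketch below (our own illustration, with the values $\alpha=0.3$, $\beta=0.01$ used later in Section 6) evaluates the lower bound $\frac{8}{9}\,e^{2Tn^2}\big[1-e^{-T\sqrt{\alpha^2-4\beta}}\big]^2\epsilon^2/(\alpha^2 n^8-4\beta n^4)$ for the choice $n(\epsilon)=\sqrt{\ln(1/\epsilon^3)/(2T)}$ (not rounded to an integer here): the data error $\epsilon^2 n(\epsilon)$ tends to zero while the bound grows without limit.

        import numpy as np

        alpha, beta, T = 0.3, 0.01, 1.0

        for eps in (1e-4, 1e-6, 1e-8, 1e-10, 1e-12):
            n = np.sqrt(np.log(1.0 / eps ** 3) / (2.0 * T))     # choice of n(eps)
            data_err = eps ** 2 * n                              # E||xi_tilde||^2 -> 0
            bracket = (1.0 - np.exp(-T * np.sqrt(alpha ** 2 - 4.0 * beta))) ** 2
            lower = (8.0 / 9.0) * np.exp(2.0 * T * n ** 2) * bracket \
                    / (alpha ** 2 * n ** 8 - 4.0 * beta * n ** 4) * eps ** 2
            print(f"eps={eps:.0e}  data error={data_err:.3e}  lower bound={lower:.3e}")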

    To construct a regularized solution, we first define the truncation operator $\mathbf{1}_N f = \sum_{p=1}^{N}\langle f,\varphi_p\rangle\varphi_p(x)$ for all $f\in H$. Now, let us consider the following problem:

    $$ \begin{cases} \widetilde{U}^N_{tt} + \alpha\Delta^2\widetilde{U}^N_t + \beta\Delta^2\widetilde{U}^N = \mathbf{1}_N F(x,t,\widetilde{U}^N), & (x,t)\in\Omega\times(0,T), \\ \widetilde{U}^N(x,T) = \mathbf{1}_N\widetilde{\rho}^{\,\epsilon}_n(x), & x\in\Omega, \\ \widetilde{U}^N_t(x,T) = \mathbf{1}_N\widetilde{\xi}^{\,\epsilon}_n(x), & x\in\Omega, \\ \widetilde{U}^N(x,t) = 0, & (x,t)\in\Gamma\times(0,T), \end{cases} \tag{5.1} $$

    where $\widetilde{\rho}^{\,\epsilon}_n(x)$, $\widetilde{\xi}^{\,\epsilon}_n(x)$ are as in Definition 2.2, and $N$ and $n$ are called the regularization parameter and the sample size, respectively. Applying Lemma 3.1, Problem (5.1) has the mild solution

    $$ \widetilde{U}^N(x,t) = R_N(T-t)\widetilde{\rho}^{\,\epsilon}_n(x) - S_N(T-t)\widetilde{\xi}^{\,\epsilon}_n(x) + \int_t^T S_N(\tau-t)F(x,\tau,\widetilde{U}^N)\,d\tau, \tag{5.2} $$

    where

    $$ R_N(t)f = \sum_{p=1}^{N}\frac{k_p^+ e^{t k_p^-}-k_p^- e^{t k_p^+}}{\sqrt{\Delta_p}}\langle f,\varphi_p\rangle\varphi_p(x), \qquad S_N(t)f = \sum_{p=1}^{N}\frac{e^{t k_p^+}-e^{t k_p^-}}{\sqrt{\Delta_p}}\langle f,\varphi_p\rangle\varphi_p(x). \tag{5.3} $$

    The nonlinear integral equation (5.2) is called the regularized solution of Problem (1.1)–(1.3) with the random perturbation model (1.4), and $N$ serves as the regularization parameter.

    Lemma 5.1 ([16,19]). Given $f\in H$ and $t\in[0,T]$, we have the following estimates:

    $$ \|R_N(t)f\|_H^2 \le C_R\,e^{2\alpha t\lambda_N^2}\|f\|_H^2, \qquad \|S_N(t)f\|_H^2 \le C_S\,e^{2\alpha t\lambda_N^2}\|f\|_H^2, \tag{5.4} $$

    where $C_R$, $C_S$ are constants depending only on $\alpha$ and $T$.
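
    A small sketch (our own illustration) of the truncated operators in (5.3): $R_N$ and $S_N$ act on the first $N$ Fourier coefficients only, and the retained kernels grow no faster than the factor $e^{\alpha t\lambda_N^2}$ appearing in Lemma 5.1 (the constants $C_R$, $C_S$ are not computed here).

        import numpy as np

        alpha, beta = 0.3, 0.01

        def kpm(lam_p):
            disc = np.sqrt(alpha ** 2 * lam_p ** 4 - 4.0 * beta * lam_p ** 2)
            return (alpha * lam_p ** 2 - disc) / 2.0, (alpha * lam_p ** 2 + disc) / 2.0

        def RN_coeff(t, lam_p):
            km, kp = kpm(lam_p)
            return (kp * np.exp(t * km) - km * np.exp(t * kp)) / (kp - km)

        def SN_coeff(t, lam_p):
            km, kp = kpm(lam_p)
            return (np.exp(t * kp) - np.exp(t * km)) / (kp - km)

        def apply_SN(t, f_coeff, N):
            # apply S_N(t) to f given by its sine coefficients f_p, p = 1..len(f_coeff)
            out = np.zeros_like(f_coeff)
            for p in range(1, min(N, len(f_coeff)) + 1):
                out[p - 1] = SN_coeff(t, p ** 2) * f_coeff[p - 1]   # lambda_p = p^2 on (0, pi)
            return out

        t, N = 0.5, 4
        f_coeff = np.array([1.0, 0.5, 0.25, 0.125, 0.0625])
        print("S_N(t) f coefficients :", apply_SN(t, f_coeff, N))
        print("kernel at p = N       :", SN_coeff(t, N ** 2))
        print("e^{alpha t lam_N^2}   :", np.exp(alpha * t * (N ** 2) ** 2))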

    Theorem 5.1. Given the functions $\rho,\xi\in H$. Assume that $F\in C(\overline{\Omega}\times[0,T]\times\mathbb{R})$ satisfies a global Lipschitz condition with respect to the third variable, i.e., there exists a constant $L>0$ independent of $x,t,u_1,u_2$ such that

    $$ \|F(\cdot,t,u_1(\cdot,t)) - F(\cdot,t,u_2(\cdot,t))\|_H \le L\|u_1(\cdot,t)-u_2(\cdot,t)\|_H. $$

    Then the nonlinear integral equation (5.2) has a unique solution $\widetilde{U}^N\in L^{\infty}([0,T];L^2(\widetilde{\Omega},H))$.

    Proof. Define the operator $P: L^{\infty}([0,T];L^2(\widetilde{\Omega},H))\to L^{\infty}([0,T];L^2(\widetilde{\Omega},H))$ as follows:

    $$ P(v)(x,t) = R_N(T-t)\widetilde{\rho}^{\,\epsilon}_n(x) - S_N(T-t)\widetilde{\xi}^{\,\epsilon}_n(x) + \int_t^T S_N(\tau-t)F(x,\tau,v)\,d\tau. $$

    For every integer $m\ge 1$, we shall begin by showing that for any $v_1,v_2\in L^{\infty}([0,T];L^2(\widetilde{\Omega},H))$,

    $$ \mathbb{E}\|P^m(v_1)(\cdot,t)-P^m(v_2)(\cdot,t)\|_H^2 \le \big[L^2 C_S T e^{2\alpha T\lambda_N^2}\big]^m\,\frac{(T-t)^m}{m!}\,\|v_1-v_2\|_{L^{\infty}([0,T];L^2(\widetilde{\Omega},H))}^2. \tag{5.5} $$

    We now proceed by induction on m. For the base case (m=1),

    $$ \mathbb{E}\|P(v_1)(\cdot,t)-P(v_2)(\cdot,t)\|_H^2 = \mathbb{E}\Big\|\int_t^T S_N(\tau-t)\big(F(x,\tau,v_1)-F(x,\tau,v_2)\big)\,d\tau\Big\|_H^2 \le T\,\mathbb{E}\int_t^T C_S e^{2\alpha(\tau-t)\lambda_N^2}\|F(\cdot,\tau,v_1)-F(\cdot,\tau,v_2)\|_H^2\,d\tau \le T(T-t)L^2 C_S e^{2\alpha T\lambda_N^2}\|v_1-v_2\|_{L^{\infty}([0,T];L^2(\widetilde{\Omega},H))}^2, $$

    where we applied Lemma 5.1 and the Lipschitz condition on $F$. Thus (5.5) holds for $m=1$. Assume now, as the inductive hypothesis, that (5.5) holds for some $m$; we show that it also holds for $m+1$.

    $$ \mathbb{E}\|P^{m+1}(v_1)(\cdot,t)-P^{m+1}(v_2)(\cdot,t)\|_H^2 = \mathbb{E}\|P(P^m(v_1))(\cdot,t)-P(P^m(v_2))(\cdot,t)\|_H^2 = \mathbb{E}\Big\|\int_t^T S_N(\tau-t)\big(F(x,\tau,P^m(v_1))-F(x,\tau,P^m(v_2))\big)\,d\tau\Big\|_H^2 \le T\,\mathbb{E}\int_t^T C_S e^{2\alpha(\tau-t)\lambda_N^2}\|F(\cdot,\tau,P^m(v_1))-F(\cdot,\tau,P^m(v_2))\|_H^2\,d\tau \le L^2 C_S T e^{2\alpha T\lambda_N^2}\,\mathbb{E}\int_t^T\|P^m(v_1)(\cdot,\tau)-P^m(v_2)(\cdot,\tau)\|_H^2\,d\tau. \tag{5.6} $$

    From the inductive hypothesis, we have

    $$ \mathbb{E}\|P^{m+1}(v_1)(\cdot,t)-P^{m+1}(v_2)(\cdot,t)\|_H^2 \le \big[L^2 C_S T e^{2\alpha T\lambda_N^2}\big]^{m+1}\|v_1-v_2\|_{L^{\infty}([0,T];L^2(\widetilde{\Omega},H))}^2\int_t^T\frac{(T-\tau)^m}{m!}\,d\tau = \big[L^2 C_S T e^{2\alpha T\lambda_N^2}\big]^{m+1}\frac{(T-t)^{m+1}}{(m+1)!}\|v_1-v_2\|_{L^{\infty}([0,T];L^2(\widetilde{\Omega},H))}^2. $$

    Hence, by the principle of mathematical induction, Formula (5.5) holds for every $m\ge 1$. We observe that

    $$ \lim_{m\to\infty}\frac{\big[L^2 C_S T^2 e^{2\alpha T\lambda_N^2}\big]^m}{m!} = 0, $$

    and therefore there exists a positive integer $m=m_0$ such that $P^{m_0}$ is a contraction. It follows that $P^{m_0}(\widetilde{U}^N)=\widetilde{U}^N$ has a unique solution $\widetilde{U}^N\in L^{\infty}([0,T];L^2(\widetilde{\Omega},H))$. This leads to $P(P^{m_0}(\widetilde{U}^N))=P(\widetilde{U}^N)$. Since $P(P^{m_0}(\widetilde{U}^N))=P^{m_0}(P(\widetilde{U}^N))$, it follows that $P^{m_0}(P(\widetilde{U}^N))=P(\widetilde{U}^N)$, so $P(\widetilde{U}^N)$ is a fixed point of $P^{m_0}$. By the uniqueness of the fixed point of $P^{m_0}$, we conclude that $P(\widetilde{U}^N)=\widetilde{U}^N$; that is, (5.2) has a unique solution $\widetilde{U}^N\in L^{\infty}([0,T];L^2(\widetilde{\Omega},H))$.

    Theorem 5.2. Let $\rho,\xi\in H^{\mu}$ ($\mu>0$). Assume that System (1.1)–(1.3) has an exact solution $u\in C([0,T];G_{\sigma,2})$, where $\sigma>2\alpha T$. Given $\epsilon>0$, the following estimate holds:

    $$ \mathbb{E}\|\widetilde{U}^N(\cdot,t)-u(\cdot,t)\|_H^2 \le 2e^{-2\alpha t\lambda_N^2}\Big(\frac{2}{\lambda_N^2}\|u\|_{L^{\infty}([0,T];G_{\sigma,2})}^2\Big)e^{2C_S L^2 T(T-t)} + 2e^{2\alpha(T-t)\lambda_N^2}\Big[3C_R\Big(\epsilon^2 n + \frac{1}{\lambda_n^{\mu}}\|\rho\|_{H^{\mu}}^2\Big) + 3C_S\Big(\epsilon^2 n + \frac{1}{\lambda_n^{\mu}}\|\xi\|_{H^{\mu}}^2\Big)\Big]e^{3C_S L^2 T(T-t)}, \tag{5.7} $$

    where the regularization parameter $N(\epsilon):=N$ and the sample size $n(\epsilon):=n$ are chosen such that

    $$ \lim_{\epsilon\to 0^+}N(\epsilon) = +\infty, \qquad \lim_{\epsilon\to 0^+}\epsilon^2 n(\epsilon)\,e^{2\alpha T\lambda_{N(\epsilon)}^2} = \lim_{\epsilon\to 0^+}\frac{e^{2\alpha T\lambda_{N(\epsilon)}^2}}{\lambda_{n(\epsilon)}^{\mu}} = 0. \tag{5.8} $$

    Remark 5.1. The order of convergence of (5.7) is

    $$ e^{-2\alpha t\lambda_{N(\epsilon)}^2}\,\max\Big\{\epsilon^2 n(\epsilon)\,e^{2\alpha T\lambda_{N(\epsilon)}^2};\ \frac{e^{2\alpha T\lambda_{N(\epsilon)}^2}}{\lambda_{n(\epsilon)}^{\mu}};\ \frac{1}{\lambda_{N(\epsilon)}^2}\Big\}. \tag{5.9} $$

    There are many ways to choose the parameters $n(\epsilon)$, $N(\epsilon)$ satisfying (5.8). Since $\lambda_{n(\epsilon)}\approx (n(\epsilon))^{2/d}$ [7], one possibility is to choose the regularization parameter $N(\epsilon)$ such that $\lambda_{N(\epsilon)}$ satisfies $e^{2\alpha T\lambda_{N(\epsilon)}^2} = (n(\epsilon))^a$, where $0<a<2\mu/d$; then $\lambda_{N(\epsilon)}^2 = \frac{a}{2\alpha T}\ln(n(\epsilon))$. The sample size $n(\epsilon)$ is chosen as $n(\epsilon) = (1/\epsilon)^{b/(a+1)}$ with $0<b<2$. In this case, the error is of order

    $$ \epsilon^{\frac{abt}{T(a+1)}}\,\max\Big\{\epsilon^{2-b};\ \epsilon^{\frac{b}{a+1}\left(\frac{2\mu}{d}-a\right)};\ \frac{2\alpha T(a+1)}{ab\,\ln\frac{1}{\epsilon}}\Big\}. $$
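
    The parameter choice of Remark 5.1 can be written down directly; the sketch below (our own illustration, with $d=1$, $\lambda_p=p^2$ and arbitrary values of $a$, $b$, $\mu$ satisfying $0<a<2\mu/d$ and $0<b<2$) computes $n(\epsilon)$, $\lambda_{N(\epsilon)}^2$ and a corresponding integer $N(\epsilon)$, and checks that the two quantities in (5.8) tend to zero.

        import numpy as np

        alpha, T = 0.3, 1.0
        d, mu = 1, 2.0
        a, b = 1.0, 1.0            # must satisfy 0 < a < 2*mu/d and 0 < b < 2

        for eps in (1e-2, 1e-3, 1e-4, 1e-6):
            n = (1.0 / eps) ** (b / (a + 1.0))              # sample size n(eps)
            lamN_sq = a / (2.0 * alpha * T) * np.log(n)     # lambda_{N(eps)}^2
            # with lambda_p = p^2 on (0, pi): N = ceil((lambda_N^2)^{1/4})
            N = int(np.ceil(lamN_sq ** 0.25))
            cond1 = eps ** 2 * n * np.exp(2 * alpha * T * lamN_sq)
            cond2 = np.exp(2 * alpha * T * lamN_sq) / (n ** (2.0 / d)) ** mu
            print(f"eps={eps:.0e}  n={n:.1f}  lambda_N^2={lamN_sq:.2f}  N={N}")
            print(f"   conditions (5.8): {cond1:.3e}  {cond2:.3e}")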

    Proof of Theorem 5.2. Let us define the integral equation

    $$ u^N(x,t) = R_N(T-t)\rho(x) - S_N(T-t)\xi(x) + \int_t^T S_N(\tau-t)F(x,\tau,u^N)\,d\tau. $$

    Then, we have

    $$ \mathbb{E}\|\widetilde{U}^N(\cdot,t)-u(\cdot,t)\|_H^2 \le 2\,\mathbb{E}\|\widetilde{U}^N(\cdot,t)-u^N(\cdot,t)\|_H^2 + 2\,\mathbb{E}\|u^N(\cdot,t)-u(\cdot,t)\|_H^2. \tag{5.10} $$

    For ease of presentation, we divide the above estimate into two main steps.

    Step 1. We have

    $$ \mathbb{E}\|\widetilde{U}^N(\cdot,t)-u^N(\cdot,t)\|_H^2 \le 3\,\mathbb{E}\|R_N(T-t)(\widetilde{\rho}^{\,\epsilon}_n-\rho)\|_H^2 + 3\,\mathbb{E}\|S_N(T-t)(\widetilde{\xi}^{\,\epsilon}_n-\xi)\|_H^2 + 3\,\mathbb{E}\Big\|\int_t^T S_N(\tau-t)\big(F(x,\tau,\widetilde{U}^N)-F(x,\tau,u^N)\big)\,d\tau\Big\|_H^2. $$

    By Hölder's inequality and the results in Lemma 5.1, we have

    $$ \mathbb{E}\|\widetilde{U}^N(\cdot,t)-u^N(\cdot,t)\|_H^2 \le 3C_R e^{2\alpha(T-t)\lambda_N^2}\,\mathbb{E}\|\widetilde{\rho}^{\,\epsilon}_n-\rho\|_H^2 + 3C_S e^{2\alpha(T-t)\lambda_N^2}\,\mathbb{E}\|\widetilde{\xi}^{\,\epsilon}_n-\xi\|_H^2 + 3T\,\mathbb{E}\int_t^T C_S e^{2\alpha(\tau-t)\lambda_N^2}\|F(\cdot,\tau,\widetilde{U}^N)-F(\cdot,\tau,u^N)\|_H^2\,d\tau. $$

    Using the results of Lemma 2.1 and the Lipschitz property of $F$, we have

    $$ \mathbb{E}\|\widetilde{U}^N(\cdot,t)-u^N(\cdot,t)\|_H^2 \le 3C_R e^{2\alpha(T-t)\lambda_N^2}\Big(\epsilon^2 n + \frac{1}{\lambda_n^{\mu}}\|\rho\|_{H^{\mu}}^2\Big) + 3C_S e^{2\alpha(T-t)\lambda_N^2}\Big(\epsilon^2 n + \frac{1}{\lambda_n^{\mu}}\|\xi\|_{H^{\mu}}^2\Big) + 3C_S L^2 T\,\mathbb{E}\int_t^T e^{2\alpha(\tau-t)\lambda_N^2}\|\widetilde{U}^N(\cdot,\tau)-u^N(\cdot,\tau)\|_H^2\,d\tau. \tag{5.11} $$

    Multiplying both sides of (5.11) by $e^{2\alpha t\lambda_N^2}$, we derive that

    $$ e^{2\alpha t\lambda_N^2}\,\mathbb{E}\|\widetilde{U}^N(\cdot,t)-u^N(\cdot,t)\|_H^2 \le 3C_R e^{2\alpha T\lambda_N^2}\Big(\epsilon^2 n + \frac{1}{\lambda_n^{\mu}}\|\rho\|_{H^{\mu}}^2\Big) + 3C_S e^{2\alpha T\lambda_N^2}\Big(\epsilon^2 n + \frac{1}{\lambda_n^{\mu}}\|\xi\|_{H^{\mu}}^2\Big) + 3C_S L^2 T\,\mathbb{E}\int_t^T e^{2\alpha\tau\lambda_N^2}\|\widetilde{U}^N(\cdot,\tau)-u^N(\cdot,\tau)\|_H^2\,d\tau. $$

    Gronwall's inequality leads to

    $$ e^{2\alpha t\lambda_N^2}\,\mathbb{E}\|\widetilde{U}^N(\cdot,t)-u^N(\cdot,t)\|_H^2 \le e^{2\alpha T\lambda_N^2}\Big[3C_R\Big(\epsilon^2 n + \frac{1}{\lambda_n^{\mu}}\|\rho\|_{H^{\mu}}^2\Big) + 3C_S\Big(\epsilon^2 n + \frac{1}{\lambda_n^{\mu}}\|\xi\|_{H^{\mu}}^2\Big)\Big]e^{3C_S L^2 T(T-t)}. \tag{5.12} $$

    Step 2. To estimate the remaining term, we define the truncated version of the solution $u$ as follows:

    $$ \chi_N u(x,t) = R_N(T-t)\rho(x) - S_N(T-t)\xi(x) + \int_t^T S_N(\tau-t)F(x,\tau,u)\,d\tau. $$

    Then, we have

    $$ \|u^N(\cdot,t)-u(\cdot,t)\|_H^2 \le 2\|u^N(\cdot,t)-\chi_N u(\cdot,t)\|_H^2 + 2\|\chi_N u(\cdot,t)-u(\cdot,t)\|_H^2. \tag{5.13} $$

    Sub-step 2.1. By Hölder's inequality, Lemma 5.1 and the Lipschitz property of $F$, we have

    $$ \|u^N(\cdot,t)-\chi_N u(\cdot,t)\|_H^2 = \Big\|\int_t^T S_N(\tau-t)\big(F(x,\tau,u^N)-F(x,\tau,u)\big)\,d\tau\Big\|_H^2 \le T\int_t^T C_S e^{2\alpha(\tau-t)\lambda_N^2}\|F(\cdot,\tau,u^N)-F(\cdot,\tau,u)\|_H^2\,d\tau \le C_S L^2 T\int_t^T e^{2\alpha(\tau-t)\lambda_N^2}\|u^N(\cdot,\tau)-u(\cdot,\tau)\|_H^2\,d\tau. \tag{5.14} $$

    Sub-step 2.2. Since $u\in C([0,T];G_{\sigma,2})$ and $2\alpha t \le 2\alpha T < \sigma$, we have

    $$ \|\chi_N u(\cdot,t)-u(\cdot,t)\|_H^2 = \sum_{p=N+1}^{\infty}\langle u(\cdot,t),\varphi_p\rangle^2 \le \frac{e^{-2\alpha t\lambda_N^2}}{\lambda_N^2}\sum_{p=N+1}^{\infty}e^{2\alpha t\lambda_p^2}\lambda_p^2\langle u(\cdot,t),\varphi_p\rangle^2 \le \frac{e^{-2\alpha t\lambda_N^2}}{\lambda_N^2}\|u\|_{L^{\infty}([0,T];G_{\sigma,2})}^2. \tag{5.15} $$

    Substituting (5.14) and (5.15) into (5.13), we have

    $$ \|u^N(\cdot,t)-u(\cdot,t)\|_H^2 \le 2C_S L^2 T\int_t^T e^{2\alpha(\tau-t)\lambda_N^2}\|u^N(\cdot,\tau)-u(\cdot,\tau)\|_H^2\,d\tau + \frac{2e^{-2\alpha t\lambda_N^2}}{\lambda_N^2}\|u\|_{L^{\infty}([0,T];G_{\sigma,2})}^2. $$

    Multiplying both sides of the above inequality by $e^{2\alpha t\lambda_N^2}$, we have

    $$ e^{2\alpha t\lambda_N^2}\|u^N(\cdot,t)-u(\cdot,t)\|_H^2 \le 2C_S L^2 T\int_t^T e^{2\alpha\tau\lambda_N^2}\|u^N(\cdot,\tau)-u(\cdot,\tau)\|_H^2\,d\tau + \frac{2}{\lambda_N^2}\|u\|_{L^{\infty}([0,T];G_{\sigma,2})}^2. $$

    Using Gronwall's inequality, we obtain

    $$ e^{2\alpha t\lambda_N^2}\|u^N(\cdot,t)-u(\cdot,t)\|_H^2 \le \Big(\frac{2}{\lambda_N^2}\|u\|_{L^{\infty}([0,T];G_{\sigma,2})}^2\Big)e^{2C_S L^2 T(T-t)}. \tag{5.16} $$

    The proof is completed by combining (5.10), (5.12) and (5.16).

    We now describe the general scheme of our numerical computation. For simplicity, we fix $T=1$ and $\Omega=(0,\pi)$. The eigenelements of the Dirichlet problem for the Laplacian in $\Omega$ have the following form:

    $$ \varphi_p(x) = \sqrt{\frac{2}{\pi}}\sin(px), \qquad \lambda_p = p^2, \qquad \text{for } p = 1,2,\dots $$

    To find a numerical solution of Eq (5.2), we first define a set of $N_x\times N_t$ grid points in the domain $\overline{\Omega}\times[0,T]$. Let $\Delta x = \pi/N_x$ be the spatial step and $\Delta t = 1/N_t$ the time step; the coordinates of the mesh points are $x_j = j\Delta x$, $j=0,\dots,N_x$, and $t_i = i\Delta t$, $i=0,\dots,N_t$, and the values of the regularized solution $\widetilde{U}^N(x,t)$ at these grid points are $\widetilde{U}^N(x_j,t_i)\approx\widetilde{U}^i_j$, where $\widetilde{U}^i_j$ denotes the numerical estimate of the regularized solution $\widetilde{U}^N(x,t)$ at the point $(x_j,t_i)$.

    Initialization step. The numerical process starts at time $t=T$. Since $\widetilde{U}^N(x,T) = R_N(0)\widetilde{\rho}^{\,\epsilon}_n$, we have

    $$ \widetilde{U}^{N_t}_j \approx \widetilde{U}^N(x_j,T) = \sum_{p=1}^{N}\langle\widetilde{\rho}^{\,\epsilon}_n,\varphi_p\rangle\varphi_p(x_j) = \sum_{p=1}^{N}\langle\rho^{\epsilon},\varphi_p\rangle\varphi_p(x_j), \qquad j = 1,\dots,N_x. \tag{6.1} $$
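
    A sketch of the initialization step (our own illustration, reusing the test function of this section with an assumed noise level): the observed coefficients $\langle\rho^{\epsilon},\varphi_p\rangle$ are computed once, and (6.1) evaluates the truncated sum on the spatial grid at $t=T$.

        import numpy as np

        rng = np.random.default_rng(3)
        T, Nx, N, eps = 1.0, 100, 6, 1e-3
        x = np.linspace(0.0, np.pi, Nx + 1)
        phi = lambda p: np.sqrt(2.0 / np.pi) * np.sin(p * x)

        rho = np.exp(-2.0) * np.sin(x) + np.exp(-1.0) * np.sin(2.0 * x)   # rho(x) = u(x, T)

        # noisy discrete observations <rho^eps, phi_p> = <rho, phi_p> + eps W_p
        rho_obs = np.array([np.trapz(rho * phi(p), x) + eps * rng.standard_normal()
                            for p in range(1, N + 1)])

        # Eq. (6.1): U^N(x_j, T) = sum_{p<=N} <rho^eps, phi_p> phi_p(x_j)
        U_T = sum(rho_obs[p - 1] * phi(p) for p in range(1, N + 1))
        print("max |U^N(., T) - rho| on the grid:", np.max(np.abs(U_T - rho)))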

    Iteration steps. For $t_i < T$, we want to determine

    $$ \widetilde{U}^N(x,t_i) = R_N(T-t_i)\widetilde{\rho}^{\,\epsilon}_n - S_N(T-t_i)\widetilde{\xi}^{\,\epsilon}_n + \underbrace{\int_{t_i}^T S_N(\tau-t_i)F(\widetilde{U}^N)(\tau)\,d\tau}_{I(t_i)}, \tag{6.2} $$

    where $I(t_i)$ is computed backward in time as follows:

    $$ I(t_i) = \sum_{p=1}^{N}\Big[\int_{t_i}^T\frac{e^{(\tau-t_i)k_p^+}-e^{(\tau-t_i)k_p^-}}{\sqrt{\Delta_p}}\,F_p(\widetilde{U}^N)(\tau)\,d\tau\Big]\varphi_p(x) = \sum_{p=1}^{N}\Big[\sum_{k=i}^{N_t-1}\int_{t_k}^{t_{k+1}}\frac{e^{(\tau-t_i)k_p^+}-e^{(\tau-t_i)k_p^-}}{\sqrt{\Delta_p}}\,F_p(\widetilde{U}^N)(t_{k+1})\,d\tau\Big]\varphi_p(x). $$

    It is worth pointing out that Simpson's rule leads to the approximation

    $$ F_p(\widetilde{U}^N)(t_i) = \langle F(\widetilde{U}^N)(\cdot,t_i),\varphi_p\rangle \approx \frac{\Delta x}{3}\sum_{h=0}^{N_x}C_h\big[F(\widetilde{U}^N(x_h,t_i))\,\varphi_p(x_h)\big], $$

    where

    $$ C_h = \begin{cases} 1, & \text{if } h = 0 \text{ or } h = N_x, \\ 2, & \text{if } h\neq 0,\ h\neq N_x \text{ and } h \text{ is odd}, \\ 4, & \text{if } h\neq 0,\ h\neq N_x \text{ and } h \text{ is even}. \end{cases} $$
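
    The coefficient $F_p(\widetilde{U}^N)(t_i)=\langle F(\widetilde{U}^N)(\cdot,t_i),\varphi_p\rangle$ is simply a spatial integral over the grid. A short sketch using SciPy's composite Simpson rule (rather than spelling out the weights $C_h$ above; this assumes SciPy is available, and an arbitrary grid function stands in for $F(\widetilde{U}^N(\cdot,t_i))$) is:

        import numpy as np
        from scipy.integrate import simpson

        Nx = 100                                   # number of spatial sub-intervals (even)
        x = np.linspace(0.0, np.pi, Nx + 1)
        phi = lambda p: np.sqrt(2.0 / np.pi) * np.sin(p * x)

        # stand-in for the grid values F(U^N(x_h, t_i)); any smooth function will do here
        F_grid = 1.0 / (1.0 + (0.3 * np.sin(x) + 0.1 * np.sin(2 * x)) ** 2)

        # F_p = <F, phi_p> approximated by composite Simpson quadrature on the grid
        F_p = [simpson(F_grid * phi(p), x=x) for p in range(1, 5)]
        print("first Fourier coefficients of F:", np.round(F_p, 6))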

    Error estimation. We use the following absolute error between the regularized solution and the exact solution:

    $$ \mathrm{Err}(t_i) = \Big(\frac{1}{N_x+1}\sum_{j=0}^{N_x}\big|u(x_j,t_i)-\widetilde{U}^N(x_j,t_i)\big|^2\Big)^{1/2}. \tag{6.3} $$
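
    The error measure (6.3) is a plain root-mean-square over the spatial grid; a small helper (our own, with hypothetical names) is:

        import numpy as np

        def rms_error(u_exact, u_reg):
            # Err(t_i) of (6.3): RMS difference of two grid vectors of length Nx+1
            u_exact, u_reg = np.asarray(u_exact), np.asarray(u_reg)
            return np.sqrt(np.mean(np.abs(u_exact - u_reg) ** 2))

        # example with synthetic grid values
        x = np.linspace(0.0, np.pi, 101)
        print(rms_error(np.sin(x), np.sin(x) + 1e-3 * np.cos(x)))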

    In this example, we fix $\alpha = 0.3$, $\beta = 0.01$ and take the inputs

    $$ \rho(x) = e^{-2}\sin x + e^{-1}\sin 2x, \qquad \xi(x) = -2e^{-2}\sin x - e^{-1}\sin 2x, $$

    and the source term $F(x,t,u) = f(x,t) + \frac{1}{1+u^2}$, where

    $$ f(x,t) = \frac{(4-2\alpha+\beta)\big(e^{-2t}\sin x + e^{-4t}\sin x\sin^2 2x + e^{-6t}\sin^3 x + 2e^{-5t}\sin^2 x\sin 2x\big) - 1}{1 + e^{-2t}\sin^2 2x + e^{-4t}\sin^2 x + 2e^{-3t}\sin x\sin 2x} + \frac{(1-16\alpha+16\beta)\big(e^{-t}\sin 2x + e^{-3t}\sin^3 2x + e^{-5t}\sin^2 x\sin 2x + 2e^{-4t}\sin x\sin^2 2x\big)}{1 + e^{-2t}\sin^2 2x + e^{-4t}\sin^2 x + 2e^{-3t}\sin x\sin 2x}. $$

    It is easy to check that the exact solution of Problem (1.1)–(1.3) is given by $u(x,t) = e^{-t}\sin 2x + e^{-2t}\sin x$.
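
    As a consistency check of the test data (our own illustration, using a simplified closed form of the source term $f$ that is algebraically equivalent to the expression displayed above), the residual of (1.1) for $u(x,t)=e^{-t}\sin 2x + e^{-2t}\sin x$ can be evaluated directly, since $\Delta^2(\sin px)=p^4\sin px$ in one dimension:

        import numpy as np

        alpha, beta = 0.3, 0.01

        def u(x, t):        return np.exp(-t) * np.sin(2 * x) + np.exp(-2 * t) * np.sin(x)
        def u_tt(x, t):     return np.exp(-t) * np.sin(2 * x) + 4 * np.exp(-2 * t) * np.sin(x)
        def lap2_u(x, t):   return 16 * np.exp(-t) * np.sin(2 * x) + np.exp(-2 * t) * np.sin(x)
        def lap2_ut(x, t):  return -16 * np.exp(-t) * np.sin(2 * x) - 2 * np.exp(-2 * t) * np.sin(x)

        def f(x, t):        # simplified closed form of the source term f used in Section 6
            return ((4 - 2 * alpha + beta) * np.exp(-2 * t) * np.sin(x)
                    + (1 - 16 * alpha + 16 * beta) * np.exp(-t) * np.sin(2 * x)
                    - 1.0 / (1.0 + u(x, t) ** 2))

        def F(x, t):        # F(x, t, u) = f(x, t) + 1/(1 + u^2)
            return f(x, t) + 1.0 / (1.0 + u(x, t) ** 2)

        x = np.linspace(0.0, np.pi, 201)
        for t in (0.25, 0.5, 0.75):
            residual = u_tt(x, t) + alpha * lap2_ut(x, t) + beta * lap2_u(x, t) - F(x, t)
            print(f"t = {t}: max |residual| = {np.max(np.abs(residual)):.2e}")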

    Figure 1 compares $\rho(x)$ and $\xi(x)$ with their estimates $\widetilde{\rho}^{\,\epsilon}_n(x)$ and $\widetilde{\xi}^{\,\epsilon}_n(x)$, respectively. As $\epsilon$ tends to $0$, the estimates become consistent with the exact functions. Figure 2 presents a 3D plot of the exact solution $u$ and the regularized solution for the case $\epsilon = 10^{-3}$. Figure 3 displays the numerical convergence for different values of $\epsilon$ and $t$.

    Figure 1. Comparison between $\rho$ and $\widetilde{\rho}^{\,\epsilon}_n$, and between $\xi$ and $\widetilde{\xi}^{\,\epsilon}_n$ ($n = 100$).
    Figure 2. For $\epsilon = 10^{-3}$: (a) the exact solution; (b) the regularized solution.
    Figure 3. Comparison between the regularized solution and the exact solution.

    Table 1 shows the values of $\mathrm{Err}(t)$ from (6.3) computed numerically. In conclusion, the proposed regularization method works properly, and the numerical solution method is also feasible in practice.

    Table 1. Errors between the regularized solution and the exact solution for $t = \frac14,\ \frac12,\ \frac34$.

    $\epsilon$      Err(1/4)        Err(1/2)        Err(3/4)
    5E-01           1.071213E+00    2.464307E-01    1.124781E-01
    1E-01           3.161161E-02    5.976654E-03    2.063242E-03
    1E-02           1.143085E-04    1.761839E-05    4.912298E-05
    1E-03           5.851911E-06    2.820096E-07    2.488333E-10


    This research is supported by Industrial University of Ho Chi Minh City (IUH) under grant number 130/HD-DHCN. Nguyen Anh Tuan thanks Van Lang University for the support.

    The authors declare no conflict of interest.



    [1] C. Huang, X. Long, J. Cao, Stability of antiperiodic recurrent neural networks with multiproportional delays, Math. Methods Appl. Sci., 43 (2020), 6093–6102. https://doi.org/10.1002/mma.6350 doi: 10.1002/mma.6350
    [2] X. Fu, F. Kong, Global exponential stability analysis of anti-periodic solutions of discontinuous bidirectional associative memory (BAM) neural networks with time-varying delays, Int. J. Nonlinear Sci. Numer. Simul., 21 (2020), 807–820. https://doi.org/10.1515/ijnsns-2019-0220 doi: 10.1515/ijnsns-2019-0220
    [3] N. Radhakrishnan, R. Kodeeswaran, R. Raja, C. Maharajan, A. Stephen, Global exponential stability analysis of anti-periodic of discontinuous BAM neural networks with time-varying delays, J. Phys.: Conf. Ser., 1850 (2021), 012098. https://doi.org/10.1088/1742-6596/1850/1/012098 doi: 10.1088/1742-6596/1850/1/012098
    [4] M. Khuddush, K. R. Prasad, Global exponential stability of almost periodic solutions for quaternion-valued RNNs with mixed delays on time scales, Bol. Soc. Mat. Mex., 28 (2022), 75. https://doi.org/10.1007/s40590-022-00467-y doi: 10.1007/s40590-022-00467-y
    [5] L. T. H. Dzung, L. V. Hien, Positive solutions and exponential stability of nonlinear time-delay systems in the model of BAM-Cohen-Grossberg neural networks, Differ. Equ. Dyn. Syst., 159 (2022). https://doi.org/10.1007/s12591-022-00605-y doi: 10.1007/s12591-022-00605-y
    [6] L. Li, D. W. C. Ho, J. Cao, J. Lu, Pinning cluster synchronization in an array of coupled neural networks under event-based mechanism, Neural Networks, 76 (2016), 1–12. https://doi.org/10.1016/j.neunet.2015.12.008 doi: 10.1016/j.neunet.2015.12.008
    [7] R. Li, X. Gao, J. Cao, Exponential synchronization of stochastic memristive neural networks with time-varying delays, Neural Process. Lett., 50 (2019), 459–475. https://doi.org/10.1007/s11063-019-09989-5 doi: 10.1007/s11063-019-09989-5
    [8] Y. Sun, L. Li, X. Liu, Exponential synchronization of neural networks with time-varying delays and stochastic impulses, Neural Networks, 132 (2020), 342–352. https://doi.org/10.1016/j.neunet.2020.09.014 doi: 10.1016/j.neunet.2020.09.014
    [9] Q. Xiao, T. Huang, Z. Zeng, Synchronization of timescale-type nonautonomous neural networks with proportional delays, IEEE Trans. Syst., Man, Cybern.: Syst., 52 (2021), 2167–2173. https://doi.org/10.1109/tsmc.2021.3049363 doi: 10.1109/tsmc.2021.3049363
    [10] J. Gao, L. Dai, Anti-periodic synchronization of quaternion-valued high-order Hopfield neural networks with delays, AIMS Math., 7 (2022), 14051–14075. https://doi.org/10.3934/math.2022775 doi: 10.3934/math.2022775
    [11] B. Liu, S. Gong, Periodic solution for impulsive cellar neural networks with time-varying delays in the leakage terms, Abstr. Appl. Anal., 2013 (2013), 1–10. https://doi.org/10.1155/2013/701087 doi: 10.1155/2013/701087
    [12] L. Peng, W. Wang, Anti-periodic solutions for shunting inhibitory cellular neural networks with time-varying delays in leakage terms, Neurocomputing, 111 (2013), 27–33. https://doi.org/10.1016/j.neucom.2012.11.031 doi: 10.1016/j.neucom.2012.11.031
    [13] Y. Xu, Periodic solutions of BAM neural networks with continuously distributed delays in the leakage terms, Neural Process. Lett., 41 (2015), 293–307. https://doi.org/10.1007/s11063-014-9346-9 doi: 10.1007/s11063-014-9346-9
    [14] H. Zhang, J. Shao, Almost periodic solutions for cellular neural networks with time-varying delays in leakage terms, Appl. Math. Comput., 219 (2013), 11471–11482. https://doi.org/10.1016/j.amc.2013.05.046 doi: 10.1016/j.amc.2013.05.046
    [15] P. Jiang, Z. Zeng, J. Chen, Almost periodic solutions for a memristor-based neural networks with leakage, time-varying and distributed delays, Neural Networks, 68 (2015), 34–45. https://doi.org/10.1016/j.neunet.2015.04.005 doi: 10.1016/j.neunet.2015.04.005
    [16] H. Zhou, Z. Zhou, W. Jiang, Almost periodic solutions for neutral type BAM neural networks with distributed leakage delays on time scales, Neurocomputing, 157 (2015), 223–230. https://doi.org/10.1016/j.neucom.2015.01.013 doi: 10.1016/j.neucom.2015.01.013
    [17] M. Song, Q. Zhu, H. Zhou, Almost sure stability of stochastic neural networks with time delays in the leakage terms, Discrete Dyn. Nat. Soc., 2016 (2016), 1–10. https://doi.org/10.1155/2016/2487957 doi: 10.1155/2016/2487957
    [18] H. Li, H. Jiang, J. Cao, Global synchronization of fractional-order quaternion-valued neural networks with leakage and discrete delays, Neurocomputing, 383 (2020), 211–219. https://doi.org/10.1016/j.neucom.2019.12.018 doi: 10.1016/j.neucom.2019.12.018
    [19] W. Zhang, H. Zhang, J. Cao, H. Zhang, D. Chen, Synchronization of delayed fractional-order complex-valued neural networks with leakage delay, Phys. A: Stat. Mech. Appl., 556 (2020), 124710. https://doi.org/10.1016/j.physa.2020.124710 doi: 10.1016/j.physa.2020.124710
    [20] A. Singh, J. N. Rai, Stability analysis of fractional order fuzzy cellular neural networks with leakage delay and time varying delays, Chinese J. Phys., 73 (2021), 589–599. https://doi.org/10.1016/j.cjph.2021.07.029 doi: 10.1016/j.cjph.2021.07.029
    [21] C. Xu, M. Liao, P. Li, S. Yuan, Impact of leakage delay on bifurcation in fractional-order complex-valued neural networks, Chaos, Solitons Fract., 142 (2021), 110535. https://doi.org/10.1016/j.chaos.2020.110535 doi: 10.1016/j.chaos.2020.110535
    [22] C. Xu, Z. Liu, C. Aouiti, P. Li, L. Yao, J. Yan, New exploration on bifurcation for fractional-order quaternion-valued neural networks involving leakage delays, Cogn. Neurodyn., 16 (2022), 1233–1248. https://doi.org/10.1007/s11571-021-09763-1 doi: 10.1007/s11571-021-09763-1
    [23] C. A. Popa, Octonion-valued neural networks, In: A. Villa, P. Masulli, A. Pons Rivero, Artificial Neural Networks and Machine Learning–ICANN 2016, Lecture Notes in Computer Science, Cham: Springer, 2016,435–443. https://doi.org/10.1007/978-3-319-44778-0_51
    [24] J. C. Baez, The octonions, Bull. Am. Math. Soc., 39 (2002), 145–205.
    [25] A. K. Kwaśniewski, Glimpses of the octonions and quaternions history and today's applications in quantum physics, Adv. Appl. Clifford Algebras, 22 (2012), 87–105. https://doi.org/10.1007/s00006-011-0299-z doi: 10.1007/s00006-011-0299-z
    [26] C. A. Popa, Global asymptotic stability for octonion-valued neural networks with delay, In: F. Cong, A. Leung, Q. Wei, Advances in Neural Networks–ISNN 2017, Lecture Notes in Computer Science, Cham: Springer, 2017,439–448. https://doi.org/10.1007/978-3-319-59072-1_52
    [27] C. A. Popa, Exponential stability for delayed octonion-valued recurrent neural networks, In: I. Rojas, G. Joya, A. Catala, Advances in Computational Intelligence–IWANN 2017, Lecture Notes in Computer Science, Cham: Springer, 2017,375–385. https://doi.org/10.1007/978-3-319-59153-7_33
    [28] C. A. Popa, Global exponential stability of octonion-valued neural networks with leakage delay and mixed delays, Neural Networks, 105 (2018), 277–293. https://doi.org/10.1016/j.neunet.2018.05.006 doi: 10.1016/j.neunet.2018.05.006
    [29] C. A. Popa, Global exponential stability of neutral-type octonion-valued neural networks with time-varying delays, Neurocomputing, 309 (2018), 117–133. https://doi.org/10.1016/j.neucom.2018.05.004 doi: 10.1016/j.neucom.2018.05.004
    [30] J. Wang, X. Liu, Global μ-stability and finite-time control of octonion-valued neural networks with unbounded delays, arXiv Preprint, 2020. https://doi.org/10.48550/arXiv.2003.11330
    [31] M. S. M'hamdi, C. Aouiti, A. Touati, A. M. Alimi, V. Snasel, Weighted pseudo almost-periodic solutions of shunting inhibitory cellular neural networks with mixed delays, Acta Math. Sci., 36 (2016), 1662–1682. https://doi.org/10.1016/s0252-9602(16)30098-4 doi: 10.1016/s0252-9602(16)30098-4
    [32] G. Yang, W. Wan, Weighted pseudo almost periodic solutions for cellular neural networks with multi-proportional delays, Neural Process. Lett., 49 (2019), 1125–1138. https://doi.org/10.1007/s11063-018-9851-3 doi: 10.1007/s11063-018-9851-3
    [33] X. Yu, Q. Wang, Weighted pseudo-almost periodic solutions for shunting inhibitory cellular neural networks on time scales, Bull. Malay. Math. Sci. Soc., 42 (2019), 2055–2074. https://doi.org/10.1007/s40840-017-0595-4 doi: 10.1007/s40840-017-0595-4
    [34] C. Huang, H. Yang, J. Cao, Weighted pseudo almost periodicity of multi-proportional delayed shunting inhibitory cellular neural networks with D operator, Discrete Cont. Dyn. Syst.-Ser. S, 14 (2020), 1259–1272. https://doi.org/10.3934/dcdss.2020372 doi: 10.3934/dcdss.2020372
    [35] M. Ayachi, Existence and exponential stability of weighted pseudo-almost periodic solutions for genetic regulatory networks with time-varying delays, Int. J. Biomath., 14 (2021), 2150006. https://doi.org/10.1142/s1793524521500066 doi: 10.1142/s1793524521500066
    [36] M. M'hamdi, On the weighted pseudo almost-periodic solutions of static DMAM neural network, Neural Process. Lett., 54 (2022), 4443–4464. https://doi.org/10.1007/s11063-022-10817-6 doi: 10.1007/s11063-022-10817-6
    [37] A. Fink, Almost periodic differential equations, Berlin: Springer, 1974. https://doi.org/10.1007/BFb0070324
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
