Research article

Bridge the gap between fixed-length and variable-length evolutionary neural architecture search algorithms

Evolutionary neural architecture search (ENAS) aims to automate the architecture design of deep neural networks (DNNs). In recent years, various ENAS algorithms have been proposed and their effectiveness has been demonstrated. In practice, most GA-based ENAS methods use fixed-length encoding strategies because the generated chromosomes can be directly processed by the standard genetic operators (especially the crossover operator). However, the performance of existing ENAS methods with fixed-length encoding strategies still leaves room for improvement, because the optimal depth is treated as known a priori. Although variable-length encoding strategies may alleviate this issue, they require the standard genetic operators to be replaced by specially developed ones. In this paper, we propose a framework to bridge this gap and to improve the performance of existing GA-based ENAS methods. First, fixed-length chromosomes are transformed into variable-length chromosomes following the encoding rules of the original ENAS methods. Second, an encoder is proposed to encode variable-length chromosomes into fixed-length representations that can be efficiently processed by standard genetic operators. Third, a decoder cotrained with the encoder decodes those processed high-dimensional representations, which cannot directly describe architectures, back into the original chromosomal forms. Overall, the performance of existing ENAS methods with both fixed-length and variable-length encoding strategies is improved by the proposed framework, and its effectiveness is justified through experimental results. Moreover, ablation experiments show that the proposed framework does not negatively affect the original ENAS methods.

    Citation: Yunhong Gong, Yanan Sun, Dezhong Peng, Xiangru Chen. Bridge the gap between fixed-length and variable-length evolutionary neural architecture search algorithms[J]. Electronic Research Archive, 2024, 32(1): 263-292. doi: 10.3934/era.2024013




In this research, we mainly focus on a coupled system of wave equations. We assume a bounded domain $\Omega\subset\mathbb{R}^N$, where $\partial\Omega$ denotes the sufficiently smooth boundary of $\Omega$, and we take positive constants $\xi_0,\xi_1,\delta,\beta_1,\beta_3$, with $m\ge1$ for $N=1,2$ and $1<m\le\frac{N+2}{N-2}$ for $N\ge3$. The coupled system with these terms is given by

$$\begin{cases}v_{tt}-\left(\xi_0+\xi_1\|\nabla v\|_2^2+\delta\left(\nabla v,\nabla v_t\right)_{L^2(\Omega)}\right)\Delta v(t)+\displaystyle\int_0^\infty g_1(s)\Delta v(t-s)\,ds+\beta_1|v_t(t)|^{m-2}v_t(t)\\\qquad+\displaystyle\int_{\tau_1}^{\tau_2}|\beta_2(r)|\,|v_t(t-r)|^{m-2}v_t(t-r)\,dr+f_1(v,w)=0,\\w_{tt}-\left(\xi_0+\xi_1\|\nabla w\|_2^2+\delta\left(\nabla w,\nabla w_t\right)_{L^2(\Omega)}\right)\Delta w(t)+\displaystyle\int_0^\infty g_2(s)\Delta w(t-s)\,ds+\beta_3|w_t(t)|^{m-2}w_t(t)\\\qquad+\displaystyle\int_{\tau_1}^{\tau_2}|\beta_4(r)|\,|w_t(t-r)|^{m-2}w_t(t-r)\,dr+f_2(v,w)=0,\\v(z,0)=v_0(z),\quad v_t(z,0)=v_1(z),\quad w(z,0)=w_0(z),\quad w_t(z,0)=w_1(z)&\text{in }\Omega,\\v_t(z,-t)=j_0(z,t),\quad w_t(z,-t)=\varrho_0(z,t)&\text{in }\Omega\times(0,\tau_2),\\v(z,t)=w(z,t)=0&\text{on }\partial\Omega\times(0,\infty),\end{cases}\tag{1.1}$$

in which $G=\Omega\times(\tau_1,\tau_2)\times(0,\infty)$, and $\tau_1<\tau_2$ are non-negative constants such that $\beta_2,\beta_4:[\tau_1,\tau_2]\to\mathbb{R}$ are the distributed time-delay weights, while $g_i$, $i=1,2$, are positive kernels.

The viscoelastic damping term, whose kernel is the function $g$, is a physical term used to describe the link between the strain and stress histories in a beam; it was inspired by Boltzmann's theory. Several publications discuss this subject and produce fresh and original findings [1,2,3,4,5], particularly concerning the hypotheses on the initial condition [6,7,8,9,10,11,12] and on the kernel; see [13,14,15,16,17]. In connection with the plate equation and the span problem, Balakrishnan and Taylor introduced in [18] a novel damping model that they dubbed Balakrishnan-Taylor damping. A few studies that specifically addressed this damping can be consulted for further information [18,19,20,21,22,23].

Several applications and real-world problems are strongly affected by delay, which turns numerous systems into interesting research topics. Recently, many authors have studied the stability of evolution systems with time delays, particularly the effect of distributed delay; see [24,25,26].

In [1], the authors presented a stability result for the system over a considerably broader class of kernels in the absence of delay and Balakrishnan-Taylor damping, i.e., $\xi_0=1$, $\xi_1=\delta=\beta_i=0$, $i=1,\dots,4$.
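For concreteness, under that specialization the system (1.1) reduces (by direct substitution of the stated constants, so that the delay and Balakrishnan-Taylor terms drop out) to the purely viscoelastic coupled system

$$v_{tt}-\Delta v(t)+\int_0^\infty g_1(s)\Delta v(t-s)\,ds+f_1(v,w)=0,\qquad w_{tt}-\Delta w(t)+\int_0^\infty g_2(s)\Delta w(t-s)\,ds+f_2(v,w)=0.$$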

Based on everything said above, a specific problem may be formulated by combining these damping terms (distributed delay terms, Balakrishnan-Taylor damping and infinite memory), especially when the past history and the distributed delay terms

$$\int_{\tau_1}^{\tau_2}|\beta_i(r)|\,|u_t(t-r)|^{m-2}u_t(t-r)\,dr,\qquad i=2,4,$$

are added. We shall attempt to shed light on this problem, since we believe it represents a fresh topic that merits investigation and analysis in contrast to those mentioned before. Our study is structured into multiple sections: in the second section, we establish the assumptions, notions and lemmas we require; in the final section, we substantiate our main result.

In this section, we introduce some basic results needed for the analysis of our problem. Let us assume the following:

(G1) $g_i:\mathbb{R}_+\to\mathbb{R}_+$, $i=1,2$, are non-increasing $C^1$ functions fulfilling

$$g_i(0)>0,\qquad \xi_0-\int_0^\infty g_i(s)\,ds=l_i>0,\quad i=1,2,\tag{2.1}$$

and we set

$$g_0=\int_0^\infty g_1(s)\,ds,\qquad \hat g_0=\int_0^\infty g_2(s)\,ds.$$

(G2) One can find $C^1$ functions $G_i:\mathbb{R}_+\to\mathbb{R}_+$ with $G_i(0)=G_i'(0)=0$. The functions $G_i$ are either linear or strictly increasing and strictly convex of class $C^2(\mathbb{R}_+)$ on $(0,\varrho]$, $\varrho\le g_i(0)$, in such a manner that

$$g_i'(t)\le-\zeta_i(t)\,G_i\left(g_i(t)\right),\qquad t\ge0,\quad\text{for }i=1,2,\tag{2.2}$$

in which $\zeta_i(t)$ are $C^1$ functions fulfilling

$$\zeta_i(t)>0,\qquad \zeta_i'(t)\le0,\qquad t\ge0.\tag{2.3}$$

(G3) $\beta_2,\beta_4:[\tau_1,\tau_2]\to\mathbb{R}$ are bounded functions fulfilling

$$\int_{\tau_1}^{\tau_2}|\beta_2(r)|\,dr<\beta_1,\qquad \int_{\tau_1}^{\tau_2}|\beta_4(r)|\,dr<\beta_3.\tag{2.4}$$
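By way of illustration (this example is ours, not taken from the cited works), hypotheses (G1)-(G3) are satisfied, for instance, by the exponential kernels $g_i(s)=\epsilon_ie^{-s}$ with $0<\epsilon_i<\xi_0$, which give $l_i=\xi_0-\epsilon_i>0$ and fulfill (2.2) with the linear choice $G_i(s)=s$, $\zeta_i(t)\equiv1$, and by any bounded weight with $\int_{\tau_1}^{\tau_2}|\beta_2(r)|\,dr<\beta_1$, e.g., the constant weight $\beta_2(r)\equiv\frac{\beta_1}{2(\tau_2-\tau_1)}$.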

(G4) $f_i:\mathbb{R}^2\to\mathbb{R}$ are $C^1$ functions with $f_i(0,0)=0$, and one can find a function $F$ such that

$$f_1(c,e)=\frac{\partial F}{\partial c}(c,e),\qquad f_2(c,e)=\frac{\partial F}{\partial e}(c,e),\qquad F\ge0,\qquad c\,f_1(c,e)+e\,f_2(c,e)-F(c,e)\ge0,\tag{2.5}$$

and

$$\left|\frac{\partial f_i}{\partial c}(c,e)\right|+\left|\frac{\partial f_i}{\partial e}(c,e)\right|\le d\left(1+|c|^{p_i-1}+|e|^{p_i-1}\right),\qquad (c,e)\in\mathbb{R}^2.\tag{2.6}$$

We also set

$$(g\circ\phi)(t):=\int_\Omega\int_0^\infty g(r)\,|\phi(t)-\phi(t-r)|^2\,dr\,dz,$$

and

$$M_1(t):=\xi_0+\xi_1\|\nabla v\|_2^2+\delta\left(\nabla v(t),\nabla v_t(t)\right)_{L^2(\Omega)},\qquad M_2(t):=\xi_0+\xi_1\|\nabla w\|_2^2+\delta\left(\nabla w(t),\nabla w_t(t)\right)_{L^2(\Omega)}.$$

Lemma 2.1 (Sobolev-Poincaré inequality [27]). Assume that $2\le q<\infty$ for $n=1,2$ and $2\le q\le\frac{2n}{n-2}$ for $n\ge3$. Then one can find $c=c(\Omega,q)>0$ such that

$$\|v\|_q\le c\,\|\nabla v\|_2,\qquad v\in H_0^1(\Omega).$$

Moreover, as in [26], we choose

$$x(z,\rho,r,t)=v_t(z,t-r\rho),\qquad y(z,\rho,r,t)=w_t(z,t-r\rho),$$

which satisfy

$$\begin{cases}r\,x_t(z,\rho,r,t)+x_\rho(z,\rho,r,t)=0,\qquad r\,y_t(z,\rho,r,t)+y_\rho(z,\rho,r,t)=0,\\x(z,0,r,t)=v_t(z,t),\qquad y(z,0,r,t)=w_t(z,t).\end{cases}\tag{2.7}$$
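The first identity in (2.7) is a quick check from the definition of $x$: since $x(z,\rho,r,t)=v_t(z,t-r\rho)$,

$$x_t(z,\rho,r,t)=v_{tt}(z,t-r\rho),\qquad x_\rho(z,\rho,r,t)=-r\,v_{tt}(z,t-r\rho),$$

so that $r\,x_t+x_\rho=0$; the same computation applies to $y$.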

Take the auxiliary variables (see [28])

$$\eta^t(z,s)=v(z,t)-v(z,t-s),\qquad \vartheta^t(z,s)=w(z,t)-w(z,t-s),\qquad s\ge0.$$

Then

$$\eta^t_t(z,s)+\eta^t_s(z,s)=v_t(z,t),\qquad \vartheta^t_t(z,s)+\vartheta^t_s(z,s)=w_t(z,t).\tag{2.8}$$
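Indeed, (2.8) follows directly by differentiating the definition of $\eta^t$:

$$\eta^t_t(z,s)=v_t(z,t)-v_t(z,t-s),\qquad \eta^t_s(z,s)=v_t(z,t-s),$$

whose sum is $v_t(z,t)$; the identity for $\vartheta^t$ is identical.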

We rewrite problem (1.1) as follows:

$$\begin{cases}v_{tt}-\left(l_1+\xi_1\|\nabla v\|_2^2+\delta\left(\nabla v,\nabla v_t\right)_{L^2(\Omega)}\right)\Delta v(t)-\displaystyle\int_0^\infty g_1(s)\Delta\eta^t(s)\,ds+\beta_1|v_t(t)|^{m-2}v_t(t)\\\qquad+\displaystyle\int_{\tau_1}^{\tau_2}|\beta_2(r)|\,|x(z,1,r,t)|^{m-2}x(z,1,r,t)\,dr+f_1(v,w)=0,\\w_{tt}-\left(l_2+\xi_1\|\nabla w\|_2^2+\delta\left(\nabla w,\nabla w_t\right)_{L^2(\Omega)}\right)\Delta w(t)-\displaystyle\int_0^\infty g_2(s)\Delta\vartheta^t(s)\,ds+\beta_3|w_t(t)|^{m-2}w_t(t)\\\qquad+\displaystyle\int_{\tau_1}^{\tau_2}|\beta_4(r)|\,|y(z,1,r,t)|^{m-2}y(z,1,r,t)\,dr+f_2(v,w)=0,\\r\,x_t(z,\rho,r,t)+x_\rho(z,\rho,r,t)=0,\\r\,y_t(z,\rho,r,t)+y_\rho(z,\rho,r,t)=0,\\\eta^t_t(z,s)+\eta^t_s(z,s)=v_t(z,t),\\\vartheta^t_t(z,s)+\vartheta^t_s(z,s)=w_t(z,t),\end{cases}\tag{2.9}$$

where

$$(z,\rho,r,t)\in\Omega\times(0,1)\times(\tau_1,\tau_2)\times(0,\infty),$$

with

$$\begin{cases}v(z,0)=v_0(z),\quad v_t(z,0)=v_1(z),\quad w(z,0)=w_0(z),\quad w_t(z,0)=w_1(z)&\text{in }\Omega,\\x(z,\rho,r,0)=j_0(z,\rho r),\quad y(z,\rho,r,0)=\varrho_0(z,\rho r)&\text{in }\Omega\times(0,1)\times(0,\tau_2),\\v(z,t)=\eta^t(z,s)=0,\quad z\in\partial\Omega,\ t,s\in(0,\infty),\\\eta^t(z,0)=0,\ t\ge0,\qquad \eta^0(z,s)=\eta_0(s)=0,\ s\ge0,\\w(z,t)=\vartheta^t(z,s)=0,\quad z\in\partial\Omega,\ t,s\in(0,\infty),\\\vartheta^t(z,0)=0,\ t\ge0,\qquad \vartheta^0(z,s)=\vartheta_0(s)=0,\ s\ge0.\end{cases}\tag{2.10}$$
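The passage from (1.1) to (2.9) rests on splitting the memory term with the auxiliary variable $\eta^t$ and using (2.1):

$$\int_0^\infty g_1(s)\Delta v(t-s)\,ds=\left(\int_0^\infty g_1(s)\,ds\right)\Delta v(t)-\int_0^\infty g_1(s)\Delta\eta^t(s)\,ds=(\xi_0-l_1)\Delta v(t)-\int_0^\infty g_1(s)\Delta\eta^t(s)\,ds,$$

so the coefficient $\xi_0$ in (1.1) is replaced by $l_1$ (and likewise by $l_2$ in the second equation).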

In the next lemma, the energy functional is introduced.

Lemma 2.2. The energy functional, denoted by $E$, is given by

$$\begin{aligned}E(t)={}&\frac12\left(\|v_t\|_2^2+\|w_t\|_2^2\right)+\frac{\xi_1}{4}\left(\|\nabla v(t)\|_2^4+\|\nabla w(t)\|_2^4\right)+\int_\Omega F(v,w)\,dz+\frac12\left(l_1\|\nabla v(t)\|_2^2+l_2\|\nabla w(t)\|_2^2\right)\\&+\frac12\left((g_1\circ\nabla v)(t)+(g_2\circ\nabla w)(t)\right)+\frac{m-1}{m}\int_0^1\int_{\tau_1}^{\tau_2}r\left(|\beta_2(r)|\,\|x(\cdot,\rho,r,t)\|_m^m+|\beta_4(r)|\,\|y(\cdot,\rho,r,t)\|_m^m\right)dr\,d\rho,\end{aligned}\tag{2.11}$$

and it fulfills

$$E'(t)\le-\gamma_0\left(\|v_t(t)\|_m^m+\|w_t(t)\|_m^m\right)+\frac12\left((g_1'\circ\nabla v)(t)+(g_2'\circ\nabla w)(t)\right)-\frac{\delta}{4}\left\{\left(\frac{d}{dt}\|\nabla v(t)\|_2^2\right)^2+\left(\frac{d}{dt}\|\nabla w(t)\|_2^2\right)^2\right\}\le0,\tag{2.12}$$

in which $\gamma_0=\min\left\{\beta_1-\int_{\tau_1}^{\tau_2}|\beta_2(r)|\,dr,\ \beta_3-\int_{\tau_1}^{\tau_2}|\beta_4(r)|\,dr\right\}$.

Proof. To prove the result, we take the $L^2(\Omega)$ inner product of the first two equations of (2.9) with $v_t$ and $w_t$, respectively, and integrate over $\Omega$, obtaining

$$\begin{aligned}&\left(v_{tt}(t),v_t(t)\right)_{L^2(\Omega)}-\left(M_3(t)\Delta v(t),v_t(t)\right)_{L^2(\Omega)}-\left(\int_0^\infty g_1(s)\Delta\eta^t(s)\,ds,\,v_t(t)\right)_{L^2(\Omega)}+\beta_1\left(|v_t|^{m-2}v_t,v_t\right)_{L^2(\Omega)}\\&\quad+\int_{\tau_1}^{\tau_2}|\beta_2(r)|\left(|x(z,1,r,t)|^{m-2}x(z,1,r,t),\,v_t(t)\right)_{L^2(\Omega)}dr\\&+\left(w_{tt}(t),w_t(t)\right)_{L^2(\Omega)}-\left(M_4(t)\Delta w(t),w_t(t)\right)_{L^2(\Omega)}-\left(\int_0^\infty g_2(s)\Delta\vartheta^t(s)\,ds,\,w_t(t)\right)_{L^2(\Omega)}+\beta_3\left(|w_t|^{m-2}w_t,w_t\right)_{L^2(\Omega)}\\&\quad+\int_{\tau_1}^{\tau_2}|\beta_4(r)|\left(|y(z,1,r,t)|^{m-2}y(z,1,r,t),\,w_t(t)\right)_{L^2(\Omega)}dr+\left(f_1(v,w),v_t(t)\right)_{L^2(\Omega)}+\left(f_2(v,w),w_t(t)\right)_{L^2(\Omega)}=0,\end{aligned}\tag{2.13}$$

in which

$$M_3(t):=l_1+\xi_1\|\nabla v\|_2^2+\delta\left(\nabla v(t),\nabla v_t(t)\right)_{L^2(\Omega)},\qquad M_4(t):=l_2+\xi_1\|\nabla w\|_2^2+\delta\left(\nabla w(t),\nabla w_t(t)\right)_{L^2(\Omega)}.$$

A direct computation gives

$$\left(v_{tt}(t),v_t(t)\right)_{L^2(\Omega)}=\frac12\frac{d}{dt}\|v_t(t)\|_2^2,\tag{2.14}$$

and further simplification leads to

$$\begin{aligned}-\left(M_3(t)\Delta v(t),v_t(t)\right)_{L^2(\Omega)}&=\left(l_1+\xi_1\|\nabla v\|_2^2+\delta\left(\nabla v(t),\nabla v_t(t)\right)_{L^2(\Omega)}\right)\int_\Omega\nabla v(t)\cdot\nabla v_t(t)\,dz\\&=\frac12\left(l_1+\xi_1\|\nabla v\|_2^2+\delta\left(\nabla v(t),\nabla v_t(t)\right)_{L^2(\Omega)}\right)\frac{d}{dt}\int_\Omega|\nabla v(t)|^2\,dz\\&=\frac{d}{dt}\left\{\frac12\left(l_1+\frac{\xi_1}{2}\|\nabla v\|_2^2\right)\|\nabla v(t)\|_2^2\right\}+\frac{\delta}{4}\left(\frac{d}{dt}\|\nabla v(t)\|_2^2\right)^2.\end{aligned}\tag{2.15}$$

The following is obtained after calculation:

$$\begin{aligned}-\left(\int_0^\infty g_1(s)\Delta\eta^t(s)\,ds,\,v_t(t)\right)_{L^2(\Omega)}&=\int_\Omega\nabla v_t\cdot\int_0^\infty g_1(s)\nabla\eta^t(s)\,ds\,dz=\int_0^\infty g_1(s)\int_\Omega\nabla v_t\cdot\nabla\eta^t(s)\,dz\,ds\\&=\int_0^\infty g_1(s)\int_\Omega\left(\nabla\eta^t_t+\nabla\eta^t_s\right)\cdot\nabla\eta^t(s)\,dz\,ds\\&=\frac12\frac{d}{dt}(g_1\circ\nabla v)(t)-\frac12(g_1'\circ\nabla v)(t).\end{aligned}\tag{2.16}$$
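For the reader's convenience, the last step in (2.16) uses (2.8) together with $\eta^t(z,0)=0$ and $g_1(\infty)=0$: the term in $\nabla\eta^t_t$ equals $\frac12\frac{d}{dt}\int_\Omega\int_0^\infty g_1(s)|\nabla\eta^t(s)|^2\,ds\,dz=\frac12\frac{d}{dt}(g_1\circ\nabla v)(t)$, while integrating the term in $\nabla\eta^t_s$ by parts in $s$ gives

$$\int_0^\infty g_1(s)\int_\Omega\nabla\eta^t_s\cdot\nabla\eta^t\,dz\,ds=\frac12\int_0^\infty g_1(s)\frac{d}{ds}\int_\Omega|\nabla\eta^t(s)|^2\,dz\,ds=-\frac12(g_1'\circ\nabla v)(t).$$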

In the same way, we have

$$\begin{aligned}&\left(w_{tt}(t),w_t(t)\right)_{L^2(\Omega)}=\frac12\frac{d}{dt}\|w_t(t)\|_2^2,\\&-\left(M_4(t)\Delta w(t),w_t(t)\right)_{L^2(\Omega)}=\frac{d}{dt}\left\{\frac12\left(l_2+\frac{\xi_1}{2}\|\nabla w\|_2^2\right)\|\nabla w(t)\|_2^2\right\}+\frac{\delta}{4}\left(\frac{d}{dt}\|\nabla w(t)\|_2^2\right)^2,\\&-\left(\int_0^\infty g_2(s)\Delta\vartheta^t(s)\,ds,\,w_t(t)\right)_{L^2(\Omega)}=\frac12\frac{d}{dt}(g_2\circ\nabla w)(t)-\frac12(g_2'\circ\nabla w)(t).\end{aligned}\tag{2.17}$$

Now, multiplying the third and fourth equations of (2.9) by $(m-1)|\beta_2(r)|\,|x|^{m-2}x$ and $(m-1)|\beta_4(r)|\,|y|^{m-2}y$, respectively, integrating over $\Omega\times(0,1)\times(\tau_1,\tau_2)$ and utilizing (2.7), we obtain

$$\begin{aligned}\frac{d}{dt}\frac{m-1}{m}\int_\Omega\int_0^1\int_{\tau_1}^{\tau_2}r\,|\beta_2(r)|\,|x(z,\rho,r,t)|^m\,dr\,d\rho\,dz&=-(m-1)\int_\Omega\int_0^1\int_{\tau_1}^{\tau_2}|\beta_2(r)|\,|x|^{m-2}x\,x_\rho\,dr\,d\rho\,dz\\&=-\frac{m-1}{m}\int_\Omega\int_0^1\int_{\tau_1}^{\tau_2}|\beta_2(r)|\,\frac{d}{d\rho}|x(z,\rho,r,t)|^m\,dr\,d\rho\,dz\\&=\frac{m-1}{m}\int_\Omega\int_{\tau_1}^{\tau_2}|\beta_2(r)|\left(|x(z,0,r,t)|^m-|x(z,1,r,t)|^m\right)dr\,dz\\&=\frac{m-1}{m}\left(\int_{\tau_1}^{\tau_2}|\beta_2(r)|\,dr\right)\|v_t(t)\|_m^m-\frac{m-1}{m}\int_{\tau_1}^{\tau_2}|\beta_2(r)|\,\|x(\cdot,1,r,t)\|_m^m\,dr.\end{aligned}\tag{2.18}$$

Similarly, we have

$$\frac{d}{dt}\frac{m-1}{m}\int_\Omega\int_0^1\int_{\tau_1}^{\tau_2}r\,|\beta_4(r)|\,|y(z,\rho,r,t)|^m\,dr\,d\rho\,dz=\frac{m-1}{m}\left(\int_{\tau_1}^{\tau_2}|\beta_4(r)|\,dr\right)\|w_t(t)\|_m^m-\frac{m-1}{m}\int_{\tau_1}^{\tau_2}|\beta_4(r)|\,\|y(\cdot,1,r,t)\|_m^m\,dr.\tag{2.19}$$

Here, we utilize Young's inequality to get

$$\int_{\tau_1}^{\tau_2}|\beta_2(r)|\left(|x(z,1,r,t)|^{m-2}x(z,1,r,t),\,v_t(t)\right)_{L^2(\Omega)}dr\le\frac1m\left(\int_{\tau_1}^{\tau_2}|\beta_2(r)|\,dr\right)\|v_t(t)\|_m^m+\frac{m-1}{m}\int_{\tau_1}^{\tau_2}|\beta_2(r)|\,\|x(\cdot,1,r,t)\|_m^m\,dr,\tag{2.20}$$

and

$$\int_{\tau_1}^{\tau_2}|\beta_4(r)|\left(|y(z,1,r,t)|^{m-2}y(z,1,r,t),\,w_t(t)\right)_{L^2(\Omega)}dr\le\frac1m\left(\int_{\tau_1}^{\tau_2}|\beta_4(r)|\,dr\right)\|w_t(t)\|_m^m+\frac{m-1}{m}\int_{\tau_1}^{\tau_2}|\beta_4(r)|\,\|y(\cdot,1,r,t)\|_m^m\,dr.\tag{2.21}$$

Finally, we have

$$\left(f_1(v,w),v_t(t)\right)_{L^2(\Omega)}+\left(f_2(v,w),w_t(t)\right)_{L^2(\Omega)}=\frac{d}{dt}\int_\Omega F(v,w)\,dz.\tag{2.22}$$
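Identity (2.22) is just the chain rule combined with (2.5): since $f_1=\partial F/\partial v$ and $f_2=\partial F/\partial w$,

$$\frac{d}{dt}\int_\Omega F(v,w)\,dz=\int_\Omega\left(\frac{\partial F}{\partial v}v_t+\frac{\partial F}{\partial w}w_t\right)dz=\left(f_1(v,w),v_t\right)_{L^2(\Omega)}+\left(f_2(v,w),w_t\right)_{L^2(\Omega)}.$$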

Thus, substituting (2.14)-(2.22) into (2.13), we obtain (2.11) and (2.12). As a result, $E$ is a non-increasing function by (2.2)-(2.5), as required.

Theorem 2.3. Let $U=(v,v_t,w,w_t,x,y,\eta^t,\vartheta^t)^T$ and assume that (2.1)-(2.5) hold. Then, for any $U_0\in\mathcal{G}$, one can find a unique solution $U$ of problem (2.9), (2.10) such that

$$U\in C(\mathbb{R}_+,\mathcal{G}).$$

If $U_0\in\mathcal{G}_1$, then $U$ fulfills

$$U\in C^1(\mathbb{R}_+,\mathcal{G})\cap C(\mathbb{R}_+,\mathcal{G}_1),$$

in which

$$\mathcal{G}=\left(H_0^1(\Omega)\times L^2(\Omega)\right)^2\times\left(L^2\left(\Omega\times(0,1)\times(\tau_1,\tau_2)\right)\right)^2\times\left(L_{g_1}\times L_{g_2}\right),$$
$$\mathcal{G}_1=\Big\{U\in\mathcal{G}\ \big|\ v,w\in H^2(\Omega)\cap H_0^1(\Omega),\ v_t,w_t\in H_0^1(\Omega),\ x,y,x_\rho,y_\rho\in L^2\left(\Omega\times(0,1)\times(\tau_1,\tau_2)\right),\ (\eta^t,\vartheta^t)\in L_{g_1}\times L_{g_2},\ \eta^t(z,0)=\vartheta^t(z,0)=0,\ x(z,0,r,t)=v_t,\ y(z,0,r,t)=w_t\Big\}.$$

Here, the stability of system (2.9), (2.10) is established and investigated, for which the following lemmas are needed.

Lemma 3.1. Suppose that (2.1) and (2.2) hold. Then

$$\int_\Omega\left(\int_0^\infty g_i(s)\left(\nabla v(t)-\nabla v(t-s)\right)ds\right)^2dz\le C_{\kappa,i}\,(h_i\circ\nabla v)(t),\qquad i=1,2,\tag{3.1}$$

where

$$C_{\kappa,i}:=\int_0^\infty\frac{g_i^2(s)}{\kappa g_i(s)-g_i'(s)}\,ds,\qquad h_i(t):=\kappa g_i(t)-g_i'(t),\qquad i=1,2.$$

Proof.

$$\begin{aligned}\int_\Omega\left(\int_0^\infty g_i(s)\left(\nabla v(t)-\nabla v(t-s)\right)ds\right)^2dz&=\int_\Omega\left(\int_{-\infty}^tg_i(t-s)\left(\nabla v(t)-\nabla v(s)\right)ds\right)^2dz\\&=\int_\Omega\left(\int_{-\infty}^t\frac{g_i(t-s)}{\sqrt{\kappa g_i(t-s)-g_i'(t-s)}}\,\sqrt{\kappa g_i(t-s)-g_i'(t-s)}\,\left(\nabla v(t)-\nabla v(s)\right)ds\right)^2dz,\tag{3.2}\end{aligned}$$

and (3.1) follows by the Cauchy-Schwarz inequality.
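In detail, taking $f=\frac{g_i(t-s)}{\sqrt{\kappa g_i(t-s)-g_i'(t-s)}}$ and $h=\sqrt{\kappa g_i(t-s)-g_i'(t-s)}\,\left(\nabla v(t)-\nabla v(s)\right)$, the Cauchy-Schwarz inequality gives

$$\left(\int_{-\infty}^tf\,h\,ds\right)^2\le\left(\int_{-\infty}^t\frac{g_i^2(t-s)}{\kappa g_i(t-s)-g_i'(t-s)}\,ds\right)\int_{-\infty}^t\left(\kappa g_i(t-s)-g_i'(t-s)\right)\left|\nabla v(t)-\nabla v(s)\right|^2ds,$$

and integrating over $\Omega$ yields exactly $C_{\kappa,i}\,(h_i\circ\nabla v)(t)$.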

Lemma 3.2 (Jensen's inequality). Let $f:\Omega\to[c,e]$ and $h:\Omega\to\mathbb{R}$ be integrable functions such that $h(z)\ge0$ for any $z\in\Omega$ and $\int_\Omega h(z)\,dz=k>0$. Furthermore, let $G:[c,e]\to\mathbb{R}$ be a convex function. Then

$$G\left(\frac1k\int_\Omega f(z)h(z)\,dz\right)\le\frac1k\int_\Omega G\left(f(z)\right)h(z)\,dz.\tag{3.3}$$

Lemma 3.3. As shown in [12], one can find positive constants $\beta$, $\hat\beta$ such that

$$I_1(t)=\int_\Omega\int_t^\infty g_1(s)|\nabla\eta^t(s)|^2\,ds\,dz\le\beta\mu(t),\qquad I_2(t)=\int_\Omega\int_t^\infty g_2(s)|\nabla\vartheta^t(s)|^2\,ds\,dz\le\hat\beta\hat\mu(t),\tag{3.4}$$

in which

$$\mu(t)=\int_0^\infty g_1(t+s)\left(1+\int_\Omega|\nabla v_0(z,s)|^2\,dz\right)ds,\qquad \hat\mu(t)=\int_0^\infty g_2(t+s)\left(1+\int_\Omega|\nabla w_0(z,s)|^2\,dz\right)ds.$$

Proof. Since the function $E(t)$ is non-increasing, utilizing (2.11) we have

$$\begin{aligned}\int_\Omega|\nabla\eta^t(s)|^2\,dz&=\int_\Omega\left|\nabla v(z,t)-\nabla v(z,t-s)\right|^2dz\le2\int_\Omega|\nabla v(z,t)|^2\,dz+2\int_\Omega|\nabla v(z,t-s)|^2\,dz\\&\le2\sup_{s>0}\int_\Omega|\nabla v(z,s)|^2\,dz+2\int_\Omega|\nabla v(z,t-s)|^2\,dz\le\frac{4E(0)}{l_1}+2\int_\Omega|\nabla v(z,t-s)|^2\,dz\end{aligned}\tag{3.5}$$

for any $t,s\ge0$. Further, we have

$$\begin{aligned}I_1(t)&\le\frac{4E(0)}{l_1}\int_t^\infty g_1(s)\,ds+2\int_t^\infty g_1(s)\int_\Omega|\nabla v(z,t-s)|^2\,dz\,ds\\&\le\frac{4E(0)}{l_1}\int_0^\infty g_1(t+s)\,ds+2\int_0^\infty g_1(t+s)\int_\Omega|\nabla v_0(z,s)|^2\,dz\,ds\le\beta\mu(t),\end{aligned}\tag{3.6}$$

in which $\beta=\max\left\{\frac{4E(0)}{l_1},2\right\}$ and $\mu(t)=\int_0^\infty g_1(t+s)\left(1+\int_\Omega|\nabla v_0(z,s)|^2\,dz\right)ds$.

In the same way, we deduce that

$$I_2(t)\le\frac{4E(0)}{l_2}\int_0^\infty g_2(t+s)\,ds+2\int_0^\infty g_2(t+s)\int_\Omega|\nabla w_0(z,s)|^2\,dz\,ds\le\hat\beta\hat\mu(t),\tag{3.7}$$

in which $\hat\beta=\max\left\{\frac{4E(0)}{l_2},2\right\}$ and $\hat\mu(t)=\int_0^\infty g_2(t+s)\left(1+\int_\Omega|\nabla w_0(z,s)|^2\,dz\right)ds$. In the following, we set

$$\Psi(t):=\int_\Omega\left(v(t)v_t(t)+w(t)w_t(t)\right)dz+\frac{\delta}{4}\left(\|\nabla v(t)\|_2^4+\|\nabla w(t)\|_2^4\right),\tag{3.8}$$

and

$$\Phi(t):=-\int_\Omega v_t\int_0^\infty g_1(s)\left(v(t)-v(t-s)\right)ds\,dz-\int_\Omega w_t\int_0^\infty g_2(s)\left(w(t)-w(t-s)\right)ds\,dz,\tag{3.9}$$

and

$$\Theta(t):=\int_0^1\int_{\tau_1}^{\tau_2}r\,e^{-\rho r}\left(|\beta_2(r)|\,\|x(\cdot,\rho,r,t)\|_m^m+|\beta_4(r)|\,\|y(\cdot,\rho,r,t)\|_m^m\right)dr\,d\rho.\tag{3.10}$$

Lemma 3.4. The functional $\Psi(t)$ defined in (3.8) fulfills

$$\begin{aligned}\Psi'(t)\le{}&\|v_t\|_2^2+\|w_t\|_2^2-\left(l-\varepsilon(c_1+c_2)-\sigma_1\right)\left(\|\nabla v\|_2^2+\|\nabla w\|_2^2\right)-\xi_1\left(\|\nabla v\|_2^4+\|\nabla w\|_2^4\right)\\&+c(\varepsilon)\left(\|v_t\|_m^m+\|w_t\|_m^m\right)+c(\sigma_1)\left(C_{\kappa,1}(h_1\circ\nabla v)(t)+C_{\kappa,2}(h_2\circ\nabla w)(t)\right)-\int_\Omega F(v,w)\,dz\\&+c(\varepsilon)\int_{\tau_1}^{\tau_2}\left(|\beta_2(r)|\,\|x(\cdot,1,r,t)\|_m^m+|\beta_4(r)|\,\|y(\cdot,1,r,t)\|_m^m\right)dr\end{aligned}\tag{3.11}$$

for any $\varepsilon,\sigma_1>0$, with $l=\min\{l_1,l_2\}$.

Proof. To prove the result, we differentiate (3.8) and then apply (1.1), obtaining

$$\begin{aligned}\Psi'(t)={}&\|v_t\|_2^2+\int_\Omega v_{tt}v\,dz+\delta\|\nabla v\|_2^2\int_\Omega\nabla v_t\cdot\nabla v\,dz+\|w_t\|_2^2+\int_\Omega w_{tt}w\,dz+\delta\|\nabla w\|_2^2\int_\Omega\nabla w_t\cdot\nabla w\,dz\\={}&\|v_t\|_2^2+\|w_t\|_2^2-\xi_0\left(\|\nabla v\|_2^2+\|\nabla w\|_2^2\right)-\xi_1\left(\|\nabla v\|_2^4+\|\nabla w\|_2^4\right)\\&\underbrace{-\beta_1\int_\Omega|v_t|^{m-2}v_t\,v\,dz}_{I_{11}}\ \underbrace{-\beta_3\int_\Omega|w_t|^{m-2}w_t\,w\,dz}_{I_{12}}+\underbrace{\int_\Omega\nabla v(t)\cdot\int_0^\infty g_1(s)\nabla v(t-s)\,ds\,dz}_{I_{21}}+\underbrace{\int_\Omega\nabla w(t)\cdot\int_0^\infty g_2(s)\nabla w(t-s)\,ds\,dz}_{I_{22}}\\&\underbrace{-\int_\Omega\int_{\tau_1}^{\tau_2}|\beta_2(r)|\,|x(z,1,r,t)|^{m-2}x(z,1,r,t)\,v\,dr\,dz}_{I_{31}}\ \underbrace{-\int_\Omega\int_{\tau_1}^{\tau_2}|\beta_4(r)|\,|y(z,1,r,t)|^{m-2}y(z,1,r,t)\,w\,dr\,dz}_{I_{32}}\ \underbrace{-\int_\Omega\left(vf_1(v,w)+wf_2(v,w)\right)dz}_{I_4}.\end{aligned}\tag{3.12}$$

We estimate the last six terms on the right-hand side of (3.12). Applying Young's, Sobolev-Poincaré and Hölder's inequalities together with (2.1) and (2.11), we have

$$I_{11}\le\varepsilon\beta_1^m\|v\|_m^m+c(\varepsilon)\|v_t\|_m^m\le\varepsilon\beta_1^mc_p^m\|\nabla v\|_2^m+c(\varepsilon)\|v_t\|_m^m\le\varepsilon\beta_1^mc_p^m\left(\frac{E(0)}{l_1}\right)^{(m-2)/2}\|\nabla v\|_2^2+c(\varepsilon)\|v_t\|_m^m\le\varepsilon c_{11}\|\nabla v\|_2^2+c(\varepsilon)\|v_t\|_m^m.\tag{3.13}$$

In addition, for any $\sigma_1>0$, by Lemma 3.1 we have

$$I_{21}=\left(\int_0^\infty g_1(s)\,ds\right)\|\nabla v\|_2^2-\int_\Omega\nabla v(t)\cdot\int_0^\infty g_1(s)\left(\nabla v(t)-\nabla v(t-s)\right)ds\,dz\le\left(\xi_0-l_1+\sigma_1\right)\|\nabla v\|_2^2+\frac{c}{\sigma_1}C_{\kappa,1}(h_1\circ\nabla v)(t).\tag{3.14}$$

Proceeding as for $I_{11}$, we obtain

$$I_{31}\le\varepsilon c_{21}\|\nabla v\|_2^2+c(\varepsilon)\int_{\tau_1}^{\tau_2}|\beta_2(r)|\,\|x(\cdot,1,r,t)\|_m^m\,dr.\tag{3.15}$$

By the same steps as for $I_{11}$, $I_{21}$ and $I_{31}$, we have

$$I_{12}\le\varepsilon c_{12}\|\nabla w\|_2^2+c(\varepsilon)\|w_t\|_m^m,\qquad I_{22}\le\left(\xi_0-l_2+\sigma_1\right)\|\nabla w\|_2^2+\frac{c}{\sigma_1}C_{\kappa,2}(h_2\circ\nabla w)(t),\qquad I_{32}\le\varepsilon c_{22}\|\nabla w\|_2^2+c(\varepsilon)\int_{\tau_1}^{\tau_2}|\beta_4(r)|\,\|y(\cdot,1,r,t)\|_m^m\,dr.\tag{3.16}$$

Combining (3.13)-(3.16) with (3.12) and (2.5), the required estimate (3.11) is obtained.

Lemma 3.5. For any $\sigma,\sigma_2,\sigma_3>0$, the functional $\Phi(t)$ introduced in (3.9) satisfies

$$\begin{aligned}\Phi'(t)\le{}&-(l_0-\sigma_3)\left(\|v_t\|_2^2+\|w_t\|_2^2\right)+\xi_1\sigma\left(\|\nabla v\|_2^4+\|\nabla w\|_2^4\right)+\sigma\left(\xi_0+\hat l_0^2+c\hat l\right)\|\nabla v\|_2^2+\sigma\left(\xi_0+\hat l_0^2+c\hat l\right)\|\nabla w\|_2^2\\&+2\sigma_2\delta E(0)\left(\frac{1}{l_1}\left(\frac12\frac{d}{dt}\|\nabla v\|_2^2\right)^2+\frac{1}{l_2}\left(\frac12\frac{d}{dt}\|\nabla w\|_2^2\right)^2\right)+c(\sigma,\sigma_2,\sigma_3)\left(C_{\kappa,1}(h_1\circ\nabla v)(t)+C_{\kappa,2}(h_2\circ\nabla w)(t)\right)\\&+c(\sigma)\left(\|v_t\|_m^m+\int_{\tau_1}^{\tau_2}|\beta_2(r)|\,\|x(\cdot,1,r,t)\|_m^m\,dr\right)+c(\sigma)\left(\|w_t\|_m^m+\int_{\tau_1}^{\tau_2}|\beta_4(r)|\,\|y(\cdot,1,r,t)\|_m^m\,dr\right),\end{aligned}\tag{3.17}$$

where $\hat l=\max\{l_1,l_2\}$, $l_0=\min\{g_0,\hat g_0\}$ and $\hat l_0=\max\{g_0,\hat g_0\}$.

Proof. Differentiating (3.9) and substituting (2.9), we obtain

$$\begin{aligned}\Phi'(t)={}&-\int_\Omega v_{tt}\int_0^\infty g_1(s)\left(v(t)-v(t-s)\right)ds\,dz-\int_\Omega v_t\,\partial_t\!\left(\int_0^\infty g_1(s)\left(v(t)-v(t-s)\right)ds\right)dz\\&-\int_\Omega w_{tt}\int_0^\infty g_2(s)\left(w(t)-w(t-s)\right)ds\,dz-\int_\Omega w_t\,\partial_t\!\left(\int_0^\infty g_2(s)\left(w(t)-w(t-s)\right)ds\right)dz\\={}&\underbrace{\left(\xi_0+\xi_1\|\nabla v\|_2^2\right)\int_\Omega\nabla v\cdot\int_0^\infty g_1(s)\left(\nabla v(t)-\nabla v(t-s)\right)ds\,dz}_{J_{11}}+\underbrace{\left(\xi_0+\xi_1\|\nabla w\|_2^2\right)\int_\Omega\nabla w\cdot\int_0^\infty g_2(s)\left(\nabla w(t)-\nabla w(t-s)\right)ds\,dz}_{J_{12}}\\&+\underbrace{\delta\int_\Omega\nabla v\cdot\nabla v_t\,dz\int_\Omega\nabla v\cdot\int_0^\infty g_1(s)\left(\nabla v(t)-\nabla v(t-s)\right)ds\,dz}_{J_{21}}+\underbrace{\delta\int_\Omega\nabla w\cdot\nabla w_t\,dz\int_\Omega\nabla w\cdot\int_0^\infty g_2(s)\left(\nabla w(t)-\nabla w(t-s)\right)ds\,dz}_{J_{22}}\\&\underbrace{-\int_\Omega\left(\int_0^\infty g_1(s)\nabla v(t-s)\,ds\right)\cdot\left(\int_0^\infty g_1(s)\left(\nabla v(t)-\nabla v(t-s)\right)ds\right)dz}_{J_{31}}\ \underbrace{-\int_\Omega\left(\int_0^\infty g_2(s)\nabla w(t-s)\,ds\right)\cdot\left(\int_0^\infty g_2(s)\left(\nabla w(t)-\nabla w(t-s)\right)ds\right)dz}_{J_{32}}\\&+\underbrace{\beta_1\int_\Omega|v_t|^{m-2}v_t\int_0^\infty g_1(s)\left(v(t)-v(t-s)\right)ds\,dz}_{J_{41}}+\underbrace{\beta_3\int_\Omega|w_t|^{m-2}w_t\int_0^\infty g_2(s)\left(w(t)-w(t-s)\right)ds\,dz}_{J_{42}}\\&+\underbrace{\int_\Omega\int_{\tau_1}^{\tau_2}|\beta_2(r)|\,|x(z,1,r,t)|^{m-2}x(z,1,r,t)\,dr\int_0^\infty g_1(s)\left(v(t)-v(t-s)\right)ds\,dz}_{J_{51}}+\underbrace{\int_\Omega\int_{\tau_1}^{\tau_2}|\beta_4(r)|\,|y(z,1,r,t)|^{m-2}y(z,1,r,t)\,dr\int_0^\infty g_2(s)\left(w(t)-w(t-s)\right)ds\,dz}_{J_{52}}\\&\underbrace{-\int_\Omega v_t\,\partial_t\!\left(\int_0^\infty g_1(s)\left(v(t)-v(t-s)\right)ds\right)dz}_{J_{61}}\ \underbrace{-\int_\Omega w_t\,\partial_t\!\left(\int_0^\infty g_2(s)\left(w(t)-w(t-s)\right)ds\right)dz}_{J_{62}}\\&+\underbrace{\int_\Omega f_1(v,w)\int_0^\infty g_1(s)\left(v(t)-v(t-s)\right)ds\,dz}_{J_{71}}+\underbrace{\int_\Omega f_2(v,w)\int_0^\infty g_2(s)\left(w(t)-w(t-s)\right)ds\,dz}_{J_{72}}.\end{aligned}\tag{3.18}$$

We now estimate the terms on the right-hand side of (3.18). Using Young's, Sobolev-Poincaré and Hölder's inequalities together with (2.1), (2.11) and Lemma 3.1, we proceed as follows:

$$|J_{11}|\le\left(\xi_0+\xi_1\|\nabla v\|_2^2\right)\left(\sigma\|\nabla v\|_2^2+\frac{1}{4\sigma}C_{\kappa,1}(h_1\circ\nabla v)(t)\right)\le\sigma\xi_0\|\nabla v\|_2^2+\sigma\xi_1\|\nabla v\|_2^4+\left(\frac{\xi_0}{4\sigma}+\frac{\xi_1E(0)}{4l_1\sigma}\right)C_{\kappa,1}(h_1\circ\nabla v)(t),\tag{3.19}$$

and

$$|J_{21}|\le\sigma_2\delta\left(\int_\Omega\nabla v\cdot\nabla v_t\,dz\right)^2\|\nabla v\|_2^2+\frac{\delta}{4\sigma_2}C_{\kappa,1}(h_1\circ\nabla v)(t)\le\frac{2\sigma_2\delta E(0)}{l_1}\left(\frac12\frac{d}{dt}\|\nabla v\|_2^2\right)^2+\frac{\delta}{4\sigma_2}C_{\kappa,1}(h_1\circ\nabla v)(t),\tag{3.20}$$
$$|J_{31}|\le\int_\Omega\left(\int_0^\infty g_1(s)|\nabla v(t)|\,ds\right)\left(\int_0^\infty g_1(s)\left|\nabla v(t)-\nabla v(t-s)\right|ds\right)dz+\int_\Omega\left(\int_0^\infty g_1(s)\left(\nabla v(t)-\nabla v(t-s)\right)ds\right)^2dz\le\sigma g_0^2\|\nabla v\|_2^2+\left(1+\frac{1}{4\sigma}\right)C_{\kappa,1}(h_1\circ\nabla v)(t),\tag{3.21}$$
$$|J_{41}|\le c(\sigma)\|v_t\|_m^m+\sigma\beta_1^m\int_\Omega\left(\int_0^\infty g_1(s)\left|v(t)-v(t-s)\right|ds\right)^mdz\le c(\sigma)\|v_t\|_m^m+\sigma\left(\beta_1^mc_p^m\left[\frac{4g_0E(0)}{l_1}\right]^{(m-2)/2}\right)C_{\kappa,1}(h_1\circ\nabla v)(t)\le c(\sigma)\|v_t\|_m^m+\sigma c_3C_{\kappa,1}(h_1\circ\nabla v)(t).\tag{3.22}$$

In the same way, we obtain

$$J_{51}\le c(\sigma)\int_{\tau_1}^{\tau_2}|\beta_2(r)|\,\|x(\cdot,1,r,t)\|_m^m\,dr+\sigma c_4C_{\kappa,1}(h_1\circ\nabla v)(t),\tag{3.23}$$

and, to estimate $J_{61}$, we compute

$$\partial_t\left(\int_0^\infty g_1(s)\left(v(t)-v(t-s)\right)ds\right)=\partial_t\left(\int_{-\infty}^tg_1(t-s)\left(v(t)-v(s)\right)ds\right)=\int_{-\infty}^tg_1'(t-s)\left(v(t)-v(s)\right)ds+\left(\int_{-\infty}^tg_1(t-s)\,ds\right)v_t(t)=\int_0^\infty g_1'(s)\left(v(t)-v(t-s)\right)ds+g_0v_t(t),$$

so that

$$J_{61}\le-(g_0-\sigma_3)\|v_t\|_2^2+\frac{c}{\sigma_3}C_{\kappa,1}(h_1\circ\nabla v)(t).\tag{3.24}$$

By the same steps, the estimates of $J_{i2}$, $i=1,\dots,6$, are obtained, and

$$J_{71}\le c\sigma l_1\|\nabla v\|_2^2+c(\sigma)C_{\kappa,1}(h_1\circ\nabla v)(t),\qquad J_{72}\le c\sigma l_2\|\nabla w\|_2^2+c(\sigma)C_{\kappa,2}(h_2\circ\nabla w)(t).\tag{3.25}$$

Substituting (3.19)-(3.25) into (3.18), the required estimate (3.17) is obtained.

Lemma 3.6. The functional $\Theta(t)$ introduced in (3.10) fulfills

$$\begin{aligned}\Theta'(t)\le{}&-\gamma_1\int_0^1\int_{\tau_1}^{\tau_2}r\left(|\beta_2(r)|\,\|x(\cdot,\rho,r,t)\|_m^m+|\beta_4(r)|\,\|y(\cdot,\rho,r,t)\|_m^m\right)dr\,d\rho\\&-\gamma_1\int_{\tau_1}^{\tau_2}\left(|\beta_2(r)|\,\|x(\cdot,1,r,t)\|_m^m+|\beta_4(r)|\,\|y(\cdot,1,r,t)\|_m^m\right)dr+\beta_5\left(\|v_t(t)\|_m^m+\|w_t(t)\|_m^m\right),\end{aligned}\tag{3.26}$$

in which $\beta_5=\max\{\beta_1,\beta_3\}$.

Proof. Differentiating $\Theta(t)$ and using (2.9), we obtain

$$\begin{aligned}\Theta'(t)={}&-m\int_\Omega\int_0^1\int_{\tau_1}^{\tau_2}e^{-r\rho}|\beta_2(r)|\,|x|^{m-2}x\,x_\rho(z,\rho,r,t)\,dr\,d\rho\,dz-m\int_\Omega\int_0^1\int_{\tau_1}^{\tau_2}e^{-r\rho}|\beta_4(r)|\,|y|^{m-2}y\,y_\rho(z,\rho,r,t)\,dr\,d\rho\,dz\\={}&-\int_\Omega\int_0^1\int_{\tau_1}^{\tau_2}re^{-r\rho}|\beta_2(r)|\,|x(z,\rho,r,t)|^m\,dr\,d\rho\,dz-\int_\Omega\int_{\tau_1}^{\tau_2}|\beta_2(r)|\left[e^{-r}|x(z,1,r,t)|^m-|x(z,0,r,t)|^m\right]dr\,dz\\&-\int_\Omega\int_0^1\int_{\tau_1}^{\tau_2}re^{-r\rho}|\beta_4(r)|\,|y(z,\rho,r,t)|^m\,dr\,d\rho\,dz-\int_\Omega\int_{\tau_1}^{\tau_2}|\beta_4(r)|\left[e^{-r}|y(z,1,r,t)|^m-|y(z,0,r,t)|^m\right]dr\,dz.\end{aligned}$$

Utilizing $x(z,0,r,t)=v_t(z,t)$, $y(z,0,r,t)=w_t(z,t)$ and $e^{-r}\le e^{-r\rho}\le1$ for any $0<\rho<1$, and selecting $\gamma_1=e^{-\tau_2}$, we have

$$\begin{aligned}\Theta'(t)\le{}&-\gamma_1\int_\Omega\int_0^1\int_{\tau_1}^{\tau_2}r\left(|\beta_2(r)|\,|x(z,\rho,r,t)|^m+|\beta_4(r)|\,|y(z,\rho,r,t)|^m\right)dr\,d\rho\,dz\\&-\gamma_1\int_\Omega\int_{\tau_1}^{\tau_2}\left(|\beta_2(r)|\,|x(z,1,r,t)|^m+|\beta_4(r)|\,|y(z,1,r,t)|^m\right)dr\,dz\\&+\int_{\tau_1}^{\tau_2}|\beta_2(r)|\,dr\int_\Omega|v_t(t)|^m\,dz+\int_{\tau_1}^{\tau_2}|\beta_4(r)|\,dr\int_\Omega|w_t(t)|^m\,dz;\end{aligned}$$

applying (2.4), the required estimate follows. In the next step, we introduce the functionals

$$A_1(t):=\int_\Omega\int_0^t\varphi_1(t-s)|\nabla v(s)|^2\,ds\,dz,\qquad A_2(t):=\int_\Omega\int_0^t\varphi_2(t-s)|\nabla w(s)|^2\,ds\,dz,\tag{3.27}$$

in which $\varphi_1(t)=\int_t^\infty g_1(s)\,ds$ and $\varphi_2(t)=\int_t^\infty g_2(s)\,ds$.

Lemma 3.7. Suppose that (2.1) and (2.2) are satisfied. Then the functional $F_1=A_1+A_2$ fulfills

$$\begin{aligned}F_1'(t)\le{}&-\frac12\left((g_1\circ\nabla v)(t)+(g_2\circ\nabla w)(t)\right)+3g_0\int_\Omega|\nabla v|^2\,dz+3\hat g_0\int_\Omega|\nabla w|^2\,dz\\&+\frac12\int_\Omega\int_t^\infty g_1(s)\left|\nabla v(t)-\nabla v(t-s)\right|^2ds\,dz+\frac12\int_\Omega\int_t^\infty g_2(s)\left|\nabla w(t)-\nabla w(t-s)\right|^2ds\,dz.\end{aligned}\tag{3.28}$$

Proof. This lemma follows readily from Lemma 3.7 in [13] and Lemma 3.4 in [15].

Now, we have sufficient mathematical tools to prove the theorem below.

Theorem 3.8. Assume (2.1)-(2.5). Then one can find positive constants $\varsigma_i$, $i=1,2,3$, and a positive function $\varsigma_4(t)$ such that the energy functional defined in (2.11) fulfills

$$E(t)\le\varsigma_1D_2^{-1}\left(\frac{\varsigma_2+\varsigma_3\int_0^t\hat\zeta(\nu)\,D_4\left(\varsigma_4(\nu)\mu_0(\nu)\right)d\nu}{\int_0^t\zeta_0(\nu)\,d\nu}\right),\tag{3.29}$$

in which

$$D_2(t)=tD'(\varepsilon_0t),\qquad D_3(t)=tD'^{-1}(t),\qquad D_4(t)=D_3^{*}(t),\tag{3.30}$$

with $D_3^{*}$ the convex conjugate of $D_3$, and

$$\mu_0=\max\{\mu,\hat\mu\},\qquad \hat\zeta=\max\{\zeta_1,\zeta_2\},\qquad \zeta_0=\min\{\zeta_1,\zeta_2\},$$

where $D=\min\{G_1,G_2\}$; these functions are increasing and convex on $(0,\varrho]$.

Proof. For the proof, we define the functional

$$H(t):=NE(t)+N_1\Psi(t)+N_2\Phi(t)+N_3\Theta(t),\tag{3.31}$$

where the positive constants $N,N_i$, $i=1,2,3$, will be determined below. Differentiating (3.31) and utilizing (2.12) and Lemmas 3.4-3.6, we have

$$\begin{aligned}H'(t)\le{}&-\left\{N_2(l_0-\sigma_3)-N_1\right\}\left(\|v_t\|_2^2+\|w_t\|_2^2\right)-\left\{N_1\xi_1-N_2\xi_1\sigma\right\}\left(\|\nabla v\|_2^4+\|\nabla w\|_2^4\right)\\&-\left\{N_1\left(l-\varepsilon(c_1+c_2)-\sigma_1\right)-N_2\sigma\left(\xi_0+\hat l_0^2+c\hat l\right)\right\}\left(\|\nabla v\|_2^2+\|\nabla w\|_2^2\right)\\&-\left\{\frac{N\delta}{4}-\frac{2N_2\sigma_2\delta E(0)}{l}\right\}\left[\left(\frac12\frac{d}{dt}\|\nabla v\|_2^2\right)^2+\left(\frac12\frac{d}{dt}\|\nabla w\|_2^2\right)^2\right]\\&+\left\{N_1c(\sigma_1)+N_2c(\sigma,\sigma_2,\sigma_3)\right\}\left(C_{\kappa,1}(h_1\circ\nabla v)(t)+C_{\kappa,2}(h_2\circ\nabla w)(t)\right)+\frac N2\left((g_1'\circ\nabla v)(t)+(g_2'\circ\nabla w)(t)\right)\\&-\left\{\gamma_0N-N_1c(\varepsilon)-N_2c(\sigma)-N_3\beta_5\right\}\left(\|v_t\|_m^m+\|w_t\|_m^m\right)\\&-\left(\gamma_1N_3-N_1c(\varepsilon)-N_2c(\sigma)\right)\int_{\tau_1}^{\tau_2}\left(|\beta_2(r)|\,\|x(\cdot,1,r,t)\|_m^m+|\beta_4(r)|\,\|y(\cdot,1,r,t)\|_m^m\right)dr\\&-N_3\gamma_1\int_0^1\int_{\tau_1}^{\tau_2}r\left(|\beta_2(r)|\,\|x(\cdot,\rho,r,t)\|_m^m+|\beta_4(r)|\,\|y(\cdot,\rho,r,t)\|_m^m\right)dr\,d\rho-N_1\int_\Omega F(v,w)\,dz.\end{aligned}\tag{3.32}$$

We now select the various constants so that the quantities in braces are positive. Setting

$$\sigma_3=\frac{l_0}{2},\qquad \varepsilon=\frac{l}{4(c_1+c_2)},\qquad \sigma_1=\frac l4,\qquad \sigma_2=\frac{lN}{16E(0)N_2},\qquad N_1=\frac{l_0}{4}N_2,$$

we thus arrive at

$$\begin{aligned}H'(t)\le{}&-\frac{l_0}{4}N_2\left(\|v_t\|_2^2+\|w_t\|_2^2\right)-\xi_1N_2\left(\frac{l_0}{4}-\sigma\right)\left(\|\nabla v\|_2^4+\|\nabla w\|_2^4\right)\\&-N_2\left(\frac{l\,l_0}{8}-\sigma\left(\xi_0+\hat l_0^2+c\hat l\right)\right)\left(\|\nabla v\|_2^2+\|\nabla w\|_2^2\right)-\frac{N\delta}{8}\left[\left(\frac12\frac{d}{dt}\|\nabla v\|_2^2\right)^2+\left(\frac12\frac{d}{dt}\|\nabla w\|_2^2\right)^2\right]\\&+N_2c(\sigma,\sigma_1,\sigma_2,\sigma_3)\left(C_{\kappa,1}(h_1\circ\nabla v)(t)+C_{\kappa,2}(h_2\circ\nabla w)(t)\right)+\frac N2\left((g_1'\circ\nabla v)(t)+(g_2'\circ\nabla w)(t)\right)-N_1\int_\Omega F(v,w)\,dz\\&-\left(\gamma_0N-N_2c(\sigma,\varepsilon)-N_3\beta_5\right)\left(\|v_t\|_m^m+\|w_t\|_m^m\right)\\&-\left(\gamma_1N_3-N_2c(\sigma,\varepsilon)\right)\int_{\tau_1}^{\tau_2}\left(|\beta_2(r)|\,\|x(\cdot,1,r,t)\|_m^m+|\beta_4(r)|\,\|y(\cdot,1,r,t)\|_m^m\right)dr\\&-N_3\gamma_1\int_0^1\int_{\tau_1}^{\tau_2}r\left(|\beta_2(r)|\,\|x(\cdot,\rho,r,t)\|_m^m+|\beta_4(r)|\,\|y(\cdot,\rho,r,t)\|_m^m\right)dr\,d\rho.\end{aligned}\tag{3.33}$$

Next, we select $\sigma$ such that

$$\sigma<\min\left\{\frac{l_0}{4},\ \frac{l\,l_0}{8\left(\xi_0+\hat l_0^2+c\hat l\right)}\right\}.$$

After that, we take $N_2$ such that

$$N_2\left(\frac{l\,l_0}{8}-\sigma\left(\xi_0+\hat l_0^2+c\hat l\right)\right)>\frac{4}{l_0},$$

and take $N_3$ large enough that

$$\gamma_1N_3-N_2c(\sigma,\varepsilon)>0.$$

As a result, for positive constants $d_i$, $i=1,\dots,5$, (3.33) can be written as

$$\begin{aligned}H'(t)\le{}&-d_1\left(\|v_t\|_2^2+\|w_t\|_2^2\right)-d_2\left(\|\nabla v\|_2^4+\|\nabla w\|_2^4\right)-\frac{4}{l_0}\left(\|\nabla v\|_2^2+\|\nabla w\|_2^2\right)-\frac{N\delta}{8}\left[\left(\frac12\frac{d}{dt}\|\nabla v\|_2^2\right)^2+\left(\frac12\frac{d}{dt}\|\nabla w\|_2^2\right)^2\right]\\&-\left(\frac N2-d_3C_\kappa\right)\left((h_1\circ\nabla v)(t)+(h_2\circ\nabla w)(t)\right)+\frac{N\kappa}{2}\left((g_1\circ\nabla v)(t)+(g_2\circ\nabla w)(t)\right)\\&-\left(\gamma_0N-c\right)\left(\|v_t\|_m^m+\|w_t\|_m^m\right)-d_5\int_\Omega F(v,w)\,dz\\&-d_4\int_0^1\int_{\tau_1}^{\tau_2}r\left(|\beta_2(r)|\,\|x(\cdot,\rho,r,t)\|_m^m+|\beta_4(r)|\,\|y(\cdot,\rho,r,t)\|_m^m\right)dr\,d\rho,\end{aligned}\tag{3.34}$$

in which $C_\kappa=\max\{C_{\kappa,1},C_{\kappa,2}\}$, and we have used $g_i'=\kappa g_i-h_i$.

We know that $\frac{\kappa g_i^2(s)}{\kappa g_i(s)-g_i'(s)}\le g_i(s)$; hence, by the Lebesgue dominated convergence theorem,

$$\lim_{\kappa\to0^+}\kappa C_{\kappa,i}=\lim_{\kappa\to0^+}\int_0^\infty\frac{\kappa g_i^2(s)}{\kappa g_i(s)-g_i'(s)}\,ds=0,\qquad i=1,2,\tag{3.35}$$

which leads to

$$\lim_{\kappa\to0^+}\kappa C_\kappa=0.$$

As a result, one can find $0<\kappa_0<1$ such that if $\kappa<\kappa_0$, then

$$\kappa C_\kappa\le\frac{1}{8d_3}.\tag{3.36}$$
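As an illustration (our own worked example, not part of the original argument), for the exponential kernel $g_i(s)=ae^{-bs}$ with $a,b>0$ one computes $\kappa g_i(s)-g_i'(s)=(\kappa+b)\,ae^{-bs}$, hence

$$C_{\kappa,i}=\int_0^\infty\frac{a^2e^{-2bs}}{(\kappa+b)\,ae^{-bs}}\,ds=\frac{a}{b(\kappa+b)},\qquad \kappa C_{\kappa,i}=\frac{a\kappa}{b(\kappa+b)}\xrightarrow[\kappa\to0^+]{}0,$$

which makes the limit (3.35) explicit in this case.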

From (3.8)-(3.10), a direct estimate gives

$$\begin{aligned}|H(t)-NE(t)|\le{}&\frac{N_1}{2}\left(\|v_t(t)\|_2^2+\|w_t(t)\|_2^2+c_p\|\nabla v(t)\|_2^2+c_p\|\nabla w(t)\|_2^2\right)+\frac{\delta N_1}{4}\left(\|\nabla v(t)\|_2^4+\|\nabla w(t)\|_2^4\right)\\&+\frac{N_2}{2}\left(\|v_t(t)\|_2^2+\|w_t(t)\|_2^2\right)+\frac{N_2}{2}c_p\left(C_{\kappa,1}(g_1\circ\nabla v)(t)+C_{\kappa,2}(g_2\circ\nabla w)(t)\right)\\&+N_3\int_0^1\int_{\tau_1}^{\tau_2}re^{-\rho r}\left(|\beta_2(r)|\,\|x(\cdot,\rho,r,t)\|_m^m+|\beta_4(r)|\,\|y(\cdot,\rho,r,t)\|_m^m\right)dr\,d\rho.\end{aligned}\tag{3.37}$$

By the fact that $e^{-\rho r}<1$ and by (2.11), we have

$$|H(t)-NE(t)|\le C(N_1,N_2,N_3)E(t)=C_1E(t),\tag{3.38}$$

that is,

$$(N-C_1)E(t)\le H(t)\le(N+C_1)E(t).\tag{3.39}$$

Here, set $\kappa=\frac{1}{2N}$ and take $N$ large enough that

$$N-C_1>0,\qquad \gamma_0N-c>0,\qquad \frac N2-\frac{1}{2\kappa_0}>0,\qquad \kappa=\frac{1}{2N}<\kappa_0.$$

We then find

$$H'(t)\le-k_2E(t)+\frac14\left((g_1\circ\nabla v)(t)+(g_2\circ\nabla w)(t)\right)\tag{3.40}$$

for some $k_2>0$, and

$$c_5E(t)\le H(t)\le c_6E(t),\qquad t\ge0,\tag{3.41}$$

for some $c_5,c_6>0$; that is,

$$H(t)\sim E(t).$$

After that, the following cases are considered.

Case 3.9. $G_i$, $i=1,2$, are linear. Multiplying (3.40) by $\zeta_0(t)=\min\{\zeta_1(t),\zeta_2(t)\}$, we find

$$\zeta_0(t)H'(t)\le-k_2\zeta_0(t)E(t)+\frac14\zeta_0(t)\left((g_1\circ\nabla v)(t)+(g_2\circ\nabla w)(t)\right)\le-k_2\zeta_0(t)E(t)+\frac14\zeta_1(t)(g_1\circ\nabla v)(t)+\frac14\zeta_2(t)(g_2\circ\nabla w)(t).\tag{3.42}$$

For the last two terms in (3.42), we have

$$\frac{\zeta_1(t)}{4}(g_1\circ\nabla v)(t)=\frac{\zeta_1(t)}{4}\int_\Omega\int_0^\infty g_1(s)|\nabla\eta^t(s)|^2\,ds\,dz=\underbrace{\frac{\zeta_1(t)}{4}\int_\Omega\int_0^tg_1(s)|\nabla\eta^t(s)|^2\,ds\,dz}_{I_1}+\underbrace{\frac{\zeta_1(t)}{4}\int_\Omega\int_t^\infty g_1(s)|\nabla\eta^t(s)|^2\,ds\,dz}_{I_2}.\tag{3.43}$$

To estimate $I_1$, using (2.2), (2.3) and (2.12),

$$I_1\le\frac14\int_\Omega\int_0^t\zeta_1(s)g_1(s)|\nabla\eta^t(s)|^2\,ds\,dz\le-\frac14\int_\Omega\int_0^tg_1'(s)|\nabla\eta^t(s)|^2\,ds\,dz\le-\frac12E'(t),\tag{3.44}$$

and by (3.4), we get

$$I_2\le\frac{\beta}{4}\zeta_1(t)\mu(t).\tag{3.45}$$

In the same way, we obtain

$$\frac{\zeta_2(t)}{4}(g_2\circ\nabla w)(t)\le-\frac12E'(t)+\frac{\hat\beta}{4}\zeta_2(t)\hat\mu(t).\tag{3.46}$$

As a result, we get

$$\zeta_0(t)H'(t)\le-k_2\zeta_0(t)E(t)-E'(t)+2\beta_0\omega(t),\tag{3.47}$$

where $\beta_0=\max\left\{\frac{\beta}{4},\frac{\hat\beta}{4}\right\}$ and $\omega(t)=\hat\zeta(t)\mu_0(t)$.

Since $\zeta_i'(t)\le0$, we get

$$H_1'(t)\le-k_2\zeta_0(t)E(t)+2\beta_0\omega(t),\tag{3.48}$$

with

$$H_1(t)=\zeta_0(t)H(t)+E(t)\sim E(t),$$

so that

$$k_4E(t)\le H_1(t)\le k_5E(t).\tag{3.49}$$

Then, integrating (3.48) over $[0,T]$ and using the fact that $E$ is non-increasing, we obtain

$$k_2E(T)\int_0^T\zeta_0(t)\,dt\le k_2\int_0^T\zeta_0(t)E(t)\,dt\le H_1(0)-H_1(T)+2\beta_0\int_0^T\omega(t)\,dt\le H_1(0)+2\beta_0\int_0^T\hat\zeta(t)\mu_0(t)\,dt.$$

Further analysis implies that

$$E(T)\le\frac{1}{k_2}\left(\frac{H_1(0)+2\beta_0\int_0^T\hat\zeta(t)\mu_0(t)\,dt}{\int_0^T\zeta_0(t)\,dt}\right).$$

Since $D$ is linear, the functions $D_2$, $D_3$ and $D_4$ are essentially linear as well, and it follows that

$$E(T)\le\lambda_1D_2^{-1}\left(\frac{\frac{H_1(0)}{k_2}+\frac{2\beta_0}{k_2}\int_0^T\hat\zeta(t)\mu_0(t)\,dt}{\int_0^T\zeta_0(t)\,dt}\right),\tag{3.50}$$

which gives (3.29) with $\varsigma_1=\lambda_1$, $\varsigma_2=\frac{H_1(0)}{k_2}$, $\varsigma_3=\frac{2\beta_0\lambda_2}{k_2}$ and $\varsigma_4(t)=\mathrm{Id}(t)=t$. Hence, the required estimate holds in this case.

Case 3.10. Let $G_i$, $i=1,2$, be nonlinear. Then, with the help of (3.28) and (3.40), consider the positive functional

$$H_2(t)=H(t)+F_1(t);$$

then, for all $t\ge0$ and for some $k_3>0$, the following holds true:

$$H_2'(t)\le-k_3E(t)+\frac12\int_\Omega\int_t^\infty g_1(s)\left|\nabla v(t)-\nabla v(t-s)\right|^2ds\,dz+\frac12\int_\Omega\int_t^\infty g_2(s)\left|\nabla w(t)-\nabla w(t-s)\right|^2ds\,dz.\tag{3.51}$$

With the help of (3.4), we have

$$k_3\int_0^tE(x)\,dx\le H_2(0)-H_2(t)+\beta_0\int_0^t\mu_0(\varsigma)\,d\varsigma\le H_2(0)+\beta_0\int_0^t\mu_0(\varsigma)\,d\varsigma.\tag{3.52}$$

Therefore,

$$\int_0^tE(x)\,dx\le k_6\mu_1(t),\tag{3.53}$$

where $k_6=\max\left\{\frac{H_2(0)}{k_3},\frac{\beta_0}{k_3}\right\}$ and $\mu_1(t)=1+\int_0^t\mu_0(\varsigma)\,d\varsigma$.

Corollary 3.11. The following is obtained from (2.11) and (3.53):

$$\begin{aligned}\int_0^t\int_\Omega\left|v(t)-v(t-s)\right|^2dz\,ds+\int_0^t\int_\Omega\left|w(t)-w(t-s)\right|^2dz\,ds&\le2\int_0^t\int_\Omega\left(v^2(t)+v^2(t-s)\right)dz\,ds+2\int_0^t\int_\Omega\left(w^2(t)+w^2(t-s)\right)dz\,ds\\&\le\frac{4}{l_0}\int_0^t\left(E(t)+E(t-s)\right)ds\le\frac{8}{l_0}\int_0^tE(x)\,dx\le\frac{8k_6}{l_0}\mu_1(t).\end{aligned}\tag{3.54}$$

Now, we define $\phi_i(t)$, $i=1,2$, by

$$\phi_1(t):=B(t)\int_0^t\int_\Omega\left|v(t)-v(t-s)\right|^2dz\,ds,\qquad \phi_2(t):=B(t)\int_0^t\int_\Omega\left|w(t)-w(t-s)\right|^2dz\,ds,\tag{3.55}$$

where $B(t)=\frac{B_0}{\mu_1(t)}$ and $0<B_0<\min\left\{1,\frac{l_0}{8k_6}\right\}$.

Then, by (3.54), we have

$$\phi_i(t)<1,\qquad \forall t>0,\ i=1,2.\tag{3.56}$$
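The bound (3.56) is immediate from (3.54) and the choice of $B_0$; for instance,

$$\phi_1(t)=B(t)\int_0^t\int_\Omega\left|v(t)-v(t-s)\right|^2dz\,ds\le\frac{B_0}{\mu_1(t)}\cdot\frac{8k_6}{l_0}\,\mu_1(t)=\frac{8k_6B_0}{l_0}<1.$$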

Further, we suppose that $\phi_i(t)>0$ for all $t>0$, $i=1,2$. In addition, we define the functionals $\Gamma_1,\Gamma_2$ by

$$\Gamma_1(t):=-\int_0^tg_1'(s)\int_\Omega\left|v(t)-v(t-s)\right|^2dz\,ds,\qquad \Gamma_2(t):=-\int_0^tg_2'(s)\int_\Omega\left|w(t)-w(t-s)\right|^2dz\,ds.\tag{3.57}$$

Here, obviously $\Gamma_i(t)\le-cE'(t)$, $i=1,2$. Since $G_i(0)=0$, $i=1,2$, and the $G_i$ are strictly convex on $(0,\varrho]$, we have

$$G_i(\lambda z)\le\lambda G_i(z),\qquad 0\le\lambda\le1,\ z\in(0,\varrho],\ i=1,2.\tag{3.58}$$
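Property (3.58) is the standard consequence of convexity and $G_i(0)=0$: writing $\lambda z=\lambda z+(1-\lambda)\cdot0$,

$$G_i(\lambda z)\le\lambda G_i(z)+(1-\lambda)G_i(0)=\lambda G_i(z),\qquad 0\le\lambda\le1,\ z\in(0,\varrho].$$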

Applying (2.2), (2.3), (3.56), (3.58) and Jensen's inequality (3.3), we get

$$\begin{aligned}\Gamma_1(t)&=\frac{1}{B(t)\phi_1(t)}\int_0^t\phi_1(t)\left(-g_1'(s)\right)\int_\Omega B(t)\left|v(t)-v(t-s)\right|^2dz\,ds\\&\ge\frac{1}{B(t)\phi_1(t)}\int_0^t\phi_1(t)\,\zeta_1(s)G_1\left(g_1(s)\right)\int_\Omega B(t)\left|v(t)-v(t-s)\right|^2dz\,ds\\&\ge\frac{\zeta_1(t)}{B(t)\phi_1(t)}\int_0^tG_1\left(\phi_1(t)g_1(s)\right)\int_\Omega B(t)\left|v(t)-v(t-s)\right|^2dz\,ds\\&\ge\frac{\zeta_1(t)}{B(t)}G_1\left(\frac{1}{\phi_1(t)}\int_0^t\phi_1(t)\,g_1(s)\int_\Omega B(t)\left|v(t)-v(t-s)\right|^2dz\,ds\right)\\&=\frac{\zeta_1(t)}{B(t)}G_1\left(B(t)\int_0^tg_1(s)\int_\Omega\left|v(t)-v(t-s)\right|^2dz\,ds\right)\\&=\frac{\zeta_1(t)}{B(t)}\overline{G}_1\left(B(t)\int_0^tg_1(s)\int_\Omega\left|v(t)-v(t-s)\right|^2dz\,ds\right),\end{aligned}\tag{3.59}$$
$$\Gamma_2(t)\ge\frac{\zeta_2(t)}{B(t)}\overline{G}_2\left(B(t)\int_0^tg_2(s)\int_\Omega\left|w(t)-w(t-s)\right|^2dz\,ds\right).\tag{3.60}$$

Here, $\overline{G}_i$, $i=1,2$, denote $C^2$-extensions of $G_i$ that are strictly convex and strictly increasing on $\mathbb{R}_+$. From (3.59) and (3.60), we have

$$\int_0^tg_1(s)\int_\Omega\left|v(t)-v(t-s)\right|^2dz\,ds\le\frac{1}{B(t)}\overline{G}_1^{-1}\left(\frac{B(t)\Gamma_1(t)}{\zeta_1(t)}\right),\qquad \int_0^tg_2(s)\int_\Omega\left|w(t)-w(t-s)\right|^2dz\,ds\le\frac{1}{B(t)}\overline{G}_2^{-1}\left(\frac{B(t)\Gamma_2(t)}{\zeta_2(t)}\right).\tag{3.61}$$

Putting (3.61) and (3.4) into (3.40), we have

$$H'(t)\le-k_2E(t)+\frac{c}{B(t)}\overline{G}_1^{-1}\left(\frac{B(t)\Gamma_1(t)}{\zeta_1(t)}\right)+\frac{c}{B(t)}\overline{G}_2^{-1}\left(\frac{B(t)\Gamma_2(t)}{\zeta_2(t)}\right)+k_6\mu_0(t).\tag{3.62}$$

Here, for $\varepsilon_0<\varrho$, we introduce

$$K_1(t)=D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)H(t)+E(t),\tag{3.63}$$

in which $D=\min\{G_1,G_2\}$; note that $K_1$ is equivalent to $E$. Since $E'(t)\le0$, $\overline{G}_i'>0$ and $\overline{G}_i''>0$, $i=1,2$, applying (3.62) we obtain

$$\begin{aligned}K_1'(t)&=\varepsilon_0\left(\frac{B'(t)E(t)+B(t)E'(t)}{E(0)}\right)D''\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)H(t)+D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)H'(t)+E'(t)\\&\le-k_2E(t)D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)+k_6\mu_0(t)D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)\\&\quad+\frac{c}{B(t)}\overline{G}_1^{-1}\left(\frac{B(t)\Gamma_1(t)}{\zeta_1(t)}\right)D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)+\frac{c}{B(t)}\overline{G}_2^{-1}\left(\frac{B(t)\Gamma_2(t)}{\zeta_2(t)}\right)D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)+E'(t).\end{aligned}\tag{3.64}$$

According to [29], the convex conjugate $\overline{G}_i^{\,*}$ of $\overline{G}_i$ fulfills

$$AB\le\overline{G}_i^{\,*}(A)+\overline{G}_i(B),\qquad i=1,2.\tag{3.65}$$
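Recall that the convex conjugate in (3.65) is $\overline{G}_i^{\,*}(A)=\sup_{s\ge0}\left(As-\overline{G}_i(s)\right)$, from which the Young-type inequality $AB\le\overline{G}_i^{\,*}(A)+\overline{G}_i(B)$ is immediate; moreover, since $\overline{G}_i\ge0$, one has the useful estimate

$$\overline{G}_i^{\,*}(A)\le A\left(\overline{G}_i'\right)^{-1}(A),$$

which is what is used to pass to the second inequality in (3.66) below.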

For $A=D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)$ and $B_i=\overline{G}_i^{-1}\left(\frac{B(t)\Gamma_i(t)}{\zeta_i(t)}\right)$, $i=1,2$, applying (3.65) to (3.64), we have

$$\begin{aligned}K_1'(t)\le{}&-k_2E(t)D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)+k_6\mu_0(t)D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)\\&+\frac{c}{B(t)}\overline{G}_1^{\,*}\left(D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)\right)+c\,\frac{\Gamma_1(t)}{\zeta_1(t)}+\frac{c}{B(t)}\overline{G}_2^{\,*}\left(D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)\right)+c\,\frac{\Gamma_2(t)}{\zeta_2(t)}\\\le{}&-k_2E(t)D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)+k_6\mu_0(t)D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)\\&+\frac{c}{B(t)}D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)\left(\overline{G}_1'\right)^{-1}\left[D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)\right]+\frac{c}{B(t)}D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)\left(\overline{G}_2'\right)^{-1}\left[D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)\right]\\&+c\,\frac{\Gamma_1(t)}{\zeta_1(t)}+c\,\frac{\Gamma_2(t)}{\zeta_2(t)}.\end{aligned}\tag{3.66}$$

Here, we multiply (3.66) by $\zeta_0(t)$ and get

$$\begin{aligned}\zeta_0(t)K_1'(t)\le{}&-k_2\zeta_0(t)E(t)D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)+k_6\zeta_0(t)\mu_0(t)D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)\\&+\frac{2c\,\zeta_0(t)}{B(t)}\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)+c\,\Gamma_1(t)+c\,\Gamma_2(t)\\\le{}&-k_2\zeta_0(t)E(t)D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)+k_6\zeta_0(t)\mu_0(t)D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)\\&+2c\varepsilon_0\frac{\zeta_0(t)E(t)}{E(0)}D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)-cE'(t),\end{aligned}\tag{3.67}$$

where we used $\varepsilon_0\frac{B(t)E(t)}{E(0)}<\varrho$, $D=\min\{G_1,G_2\}$ (so that $\left(\overline{G}_i'\right)^{-1}\left(D'(\theta)\right)\le\theta$) and $\Gamma_i(t)\le-cE'(t)$, $i=1,2$. Define the functional $K_2(t)$ as

$$K_2(t)=\zeta_0(t)K_1(t)+cE(t).\tag{3.68}$$

Effortlessly, one can prove that $K_2(t)\sim E(t)$; i.e., one can find two positive constants $m_1$ and $m_2$ such that

$$m_1K_2(t)\le E(t)\le m_2K_2(t).\tag{3.69}$$

Then, since $\zeta_0'\le0$, we have

$$K_2'(t)\le-\beta_6\frac{\zeta_0(t)E(t)}{E(0)}D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)+k_6\zeta_0(t)\mu_0(t)D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)=-\beta_6\frac{\zeta_0(t)}{B(t)}D_2\left(\frac{B(t)E(t)}{E(0)}\right)+k_6\zeta_0(t)\mu_0(t)D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right),\tag{3.70}$$

where $\beta_6=k_2E(0)-2c\varepsilon_0$ and $D_2(t)=tD'(\varepsilon_0t)$.

We choose $\varepsilon_0$ small enough so that $\beta_6>0$; note that $D_2'(t)=D'(\varepsilon_0t)+\varepsilon_0tD''(\varepsilon_0t)$, so $D_2(t),D_2'(t)>0$ on $(0,1]$, since the $G_i$ are strictly increasing on $(0,\varrho]$. Applying the Young-type inequality (3.65) to the last term in (3.70), with $A=D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)$ and $B=\frac{k_6}{\sigma B(t)}\mu_0(t)$, we find

$$\begin{aligned}k_6\mu_0(t)D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)&=\sigma B(t)\left(\frac{k_6}{\sigma B(t)}\mu_0(t)\right)D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)\\&\le\sigma B(t)\,D_4\left(\frac{k_6}{\sigma B(t)}\mu_0(t)\right)+\sigma B(t)\,D_3\left(D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)\right)\\&=\sigma B(t)\,D_4\left(\frac{k_6}{\sigma B(t)}\mu_0(t)\right)+\sigma B(t)\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)D'\left(\varepsilon_0\frac{B(t)E(t)}{E(0)}\right)\\&=\sigma B(t)\,D_4\left(\frac{k_6}{\sigma B(t)}\mu_0(t)\right)+\sigma\varepsilon_0B(t)\,D_2\left(\frac{B(t)E(t)}{E(0)}\right).\end{aligned}\tag{3.71}$$

Here, choosing $\sigma$ small enough that $\beta_7:=\beta_6-\sigma\varepsilon_0>0$ and combining (3.70) and (3.71) (together with $B(t)\le1$), we have

$$K_2'(t)\le-\beta_7\frac{\zeta_0(t)}{B(t)}D_2\left(\frac{B(t)E(t)}{E(0)}\right)+\sigma\zeta_0(t)B(t)\,D_4\left(\frac{k_6}{\sigma B(t)}\mu_0(t)\right),\tag{3.72}$$

where $D_3(t)=tD'^{-1}(t)$ and $D_4(t)=D_3^{*}(t)$.

In light of the facts $E'\le0$ and $B'\le0$, the map $t\mapsto D_2\left(\frac{B(t)E(t)}{E(0)}\right)$ is non-increasing; as a consequence, for $0\le t\le T$ we have

$$D_2\left(\frac{B(T)E(T)}{E(0)}\right)\le D_2\left(\frac{B(t)E(t)}{E(0)}\right).\tag{3.73}$$

In the next step, we combine (3.72) with (3.73) and multiply by $B(t)\le1$, obtaining

$$B(t)K_2'(t)+\beta_7\zeta_0(t)D_2\left(\frac{B(T)E(T)}{E(0)}\right)\le\sigma\zeta_0(t)\,D_4\left(\frac{k_6}{\sigma B(t)}\mu_0(t)\right).\tag{3.74}$$

Since $B'\le0$, for any $0<t<T$,

$$\left(BK_2\right)'(t)+\beta_7\zeta_0(t)D_2\left(\frac{B(T)E(T)}{E(0)}\right)\le\sigma\zeta_0(t)\,D_4\left(\frac{k_6}{\sigma B(t)}\mu_0(t)\right)\le\sigma\hat\zeta(t)\,D_4\left(\frac{k_6}{\sigma B(t)}\mu_0(t)\right).\tag{3.75}$$

Integrating (3.75) over $[0,T]$ and using $B(0)\le1$, we obtain

$$D_2\left(\frac{B(T)E(T)}{E(0)}\right)\int_0^T\zeta_0(t)\,dt\le\frac{K_2(0)}{\beta_7}+\frac{\sigma}{\beta_7}\int_0^T\hat\zeta(t)\,D_4\left(\frac{k_6}{\sigma B(t)}\mu_0(t)\right)dt.\tag{3.76}$$

Consequently, we have

$$D_2\left(\frac{B(T)E(T)}{E(0)}\right)\le\frac{\frac{K_2(0)}{\beta_7}+\frac{\sigma}{\beta_7}\int_0^T\hat\zeta(t)\,D_4\left(\frac{k_6}{\sigma B(t)}\mu_0(t)\right)dt}{\int_0^T\zeta_0(t)\,dt}.\tag{3.77}$$

As a result,

$$\frac{B(T)E(T)}{E(0)}\le D_2^{-1}\left(\frac{\frac{K_2(0)}{\beta_7}+\frac{\sigma}{\beta_7}\int_0^T\hat\zeta(t)\,D_4\left(\frac{k_6}{\sigma B(t)}\mu_0(t)\right)dt}{\int_0^T\zeta_0(t)\,dt}\right),\tag{3.78}$$

and therefore

$$E(T)\le\frac{E(0)}{B(T)}D_2^{-1}\left(\frac{\frac{K_2(0)}{\beta_7}+\frac{\sigma}{\beta_7}\int_0^T\hat\zeta(t)\,D_4\left(\frac{k_6}{\sigma B(t)}\mu_0(t)\right)dt}{\int_0^T\zeta_0(t)\,dt}\right),\tag{3.79}$$

which is (3.29) with $\varsigma_1=\frac{E(0)}{B(T)}$, $\varsigma_2=\frac{K_2(0)}{\beta_7}$, $\varsigma_3=\frac{\sigma}{\beta_7}$ and $\varsigma_4(t)=\frac{k_6}{\sigma B(t)}$. Hence, the proof of Theorem 3.8 is complete.

The purpose of this work was to study a coupled system of nonlinear viscoelastic wave equations with distributed delay components, infinite memory and Balakrishnan-Taylor damping. We assumed that the kernels $g_i:\mathbb{R}_+\to\mathbb{R}_+$ satisfy

$$g_i'(t)\le-\zeta_i(t)\,G_i\left(g_i(t)\right),\qquad t\in\mathbb{R}_+,\quad\text{for }i=1,2,$$

in which $\zeta_i$ and $G_i$ are functions as in (G2). We proved the stability of the system under this highly general assumption on the behaviour of $g_i$ at infinity and by dropping the boundedness assumptions on the history data. This type of problem frequently arises in mathematical models in the applied sciences, especially in the theory of viscoelasticity. What interests us in the present work is the combination of these damping terms, which dictates their joint appearance in the problem. In future work, we will attempt to apply the same method to the same problem with additional damping terms.

    The researchers would like to thank the Deanship of Scientific Research, Qassim University for funding the publication of this project.

The authors declare there is no conflict of interest.



    [1] A. Esteva, K. Chou, S. Yeung, N. Naik, A. Madani, A. Mottaghi, et al., Deep learning-enabled medical computer vision, NPJ Digit. Med., 4 (2021), 1–9. https://doi.org/10.1038/s41746-020-00376-2 doi: 10.1038/s41746-020-00376-2
    [2] A. Bhargava, A. Bansal, Fruits and vegetables quality evaluation using computer vision: A review, J. King Saud Univ. Comput. Inf. Sci., 33 (2021), 243–257. https://doi.org/10.1016/j.jksuci.2018.06.002 doi: 10.1016/j.jksuci.2018.06.002
    [3] Z. Wang, Q. She, T. E. Ward, Generative adversarial networks in computer vision: A survey and taxonomy, ACM Comput. Surv., 54 (2021), 1–38. https://doi.org/10.1145/3439723 doi: 10.1145/3439723
    [4] Y. Gu, R. Tinn, H. Cheng, M. Lucas, N. Usuyama, X. Liu, et al., Domain-specific language model pretraining for biomedical natural language processing, ACM Trans. Comput. Healthcare, 3 (2021), 1–23. https://doi.org/10.1145/3458754 doi: 10.1145/3458754
    [5] D. H. Maulud, S. R. Zeebaree, K. Jacksi, M. A. M. Sadeeq, K. H. Sharif, State of art for semantic analysis of natural language processing, Qubahan Acad. J., 1 (2021), 21–28. https://doi.org/10.48161/qaj.v1n2a40 doi: 10.48161/qaj.v1n2a40
    [6] I. Guellil, H. Saâdane, F. Azouaou, B. Gueni, D. Nouvel, Arabic natural language processing: An overview, J. King Saud Univ. Comput. Inf. Sci., 3 (2021), 497–507. https://doi.org/10.1016/j.jksuci.2019.02.006 doi: 10.1016/j.jksuci.2019.02.006
    [7] K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition, preprint, arXiv: 1409.1556.
    [8] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, (2016), 770–778. https://doi.org/10.1109/cvpr.2016.90
    [9] G. Huang, Z. Liu, L. Van Der Maaten, K. Q. Weinberger, Densely connected convolutional networks, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, (2017), 4700–4708. https://doi.org/10.1109/cvpr.2017.243
[10] J. Bergstra, R. Bardenet, Y. Bengio, B. Kégl, Algorithms for hyper-parameter optimization, in Advances in Neural Information Processing Systems, Curran Associates, Inc., 24 (2011), 1–9.
    [11] N. Mitschke, M. Heizmann, K. H. Noffz, R. Wittmann, Gradient based evolution to optimize the structure of convolutional neural networks, in 2018 25th IEEE International Conference on Image Processing, IEEE, (2018), 3438–3442. https://doi.org/10.1109/icip.2018.8451394
    [12] L. Xie, A. Yuille, Genetic cnn, in Proceedings of the IEEE International Conference on Computer Vision, IEEE, (2017), 1379–1388. https://doi.org/10.1109/iccv.2017.154
    [13] Z. Lu, I. Whalen, Y. Dhebar, K. Deb, E. D. Goodman, W. Banzhaf, et al., Multiobjective evolutionary design of deep convolutional neural networks for image classification, IEEE Trans. Evol. Comput., 25 (2020), 277–291. https://doi.org/10.1109/tevc.2020.3024708 doi: 10.1109/tevc.2020.3024708
    [14] X. Xiao, M. Yan, S. Basodi, C. Ji, Y. Pan, Efficient hyperparameter optimization in deep learning using a variable length genetic algorithm, preprint, arXiv: 2006.12703.
    [15] K. Zhou, Y. Dong, K. Wang, W. S. Lee, B. Hooi, H. Xu, et al., Understanding and resolving performance degradation in deep graph convolutional networks, in Proceedings of the 30th ACM International Conference on Information & Knowledge Management, ACM, (2021), 2728–2737. https://doi.org/10.1145/3459637.3482488
    [16] Q. Li, Z. Han, X. M. Wu, Deeper insights into graph convolutional networks for semi-supervised learning, in Proceedings of the AAAI Conference on Artificial Intelligence, AAAI Press, 32 (2018), 1–8. https://doi.org/10.1609/aaai.v32i1.11604
    [17] K. Oono, T. Suzuki, On asymptotic behaviors of graph cnns from dynamical systems perspective, preprint, arXiv: 1905.10947.
    [18] D. Chen, Y. Lin, W. Li, P. Li, J. Zhou, X. Sun, Measuring and relieving the over-smoothing problem for graph neural networks from the topological view, in Proceedings of the AAAI Conference on Artificial Intelligence, AAAI Press, 34 (2020), 3438–3445. https://doi.org/10.1609/aaai.v34i04.5747
    [19] L. Ma, Y. Liu, G. Yu, X. Wang, H. Mo, G. G. Wang, et al., Decomposition-based multiobjective optimization for variable-length mixed-variable pareto optimization and its application in cloud service allocation, IEEE Trans. Syst. Man Cybern.: Syst., 53 (2023), 7138–7151. https://doi.org/10.1109/tsmc.2023.3295371 doi: 10.1109/tsmc.2023.3295371
    [20] L. Muwafaq, N. K. Noordin, M. Othman, A. Ismail, F. Hashim, Cloudlet based computing optimization using variable-length whale optimization and differential evolution, IEEE Access, 11 (2023), 45098–45112. https://doi.org/10.1109/access.2023.3272901 doi: 10.1109/access.2023.3272901
    [21] R. Domala, U. Singh, A survey on state-of-the-art applications of variable length chromosome (vlc) based ga, in Advances in Artificial Intelligence and Data Engineering, Springer, 1133 (2021), 615–630. https://doi.org/10.1007/978-981-15-3514-7_47
    [22] A. Maruyama, N. Shibata, Y. Murata, K. Yasumoto, M. Ito, P-tour: A personal navigation system with travel schedule planning and route guidance based on schedule, IPSJ J., 45 (2004), 2678–2687.
    [23] M. Alajlan, A. Koubaa, I. Chaari, H. Bennaceur, A. Ammar, Global path planning for mobile robots in large-scale grid environments using genetic algorithms, in 2013 International Conference on Individual and Collective Behaviors in Robotics, IEEE, (2013), 1–8. https://doi.org/10.1109/icbr.2013.6729271
    [24] J. J. Lee, D. W. Kim, An effective initialization method for genetic algorithm-based robot path planning using a directed acyclic graph, Inf. Sci., 332 (2016), 1–18. https://doi.org/10.1016/j.ins.2015.11.004 doi: 10.1016/j.ins.2015.11.004
    [25] Z. Qiongbing, D. Lixin, A new crossover mechanism for genetic algorithms with variable-length chromosomes for path optimization problems, Expert Syst. Appl., 60 (2016), 183–189. https://doi.org/10.1016/j.eswa.2016.04.005 doi: 10.1016/j.eswa.2016.04.005
    [26] Y. Sun, B. Xue, M. Zhang, G. G. Yen, Evolving deep convolutional neural networks for image classification, IEEE Trans. Evol. Comput., 24 (2019), 394–407. https://doi.org/10.1109/TEVC.2019.2916183 doi: 10.1109/TEVC.2019.2916183
    [27] M. H. Aliefa, S. Suyanto, Variable-length chromosome for optimizing the structure of recurrent neural network, in 2020 International Conference on Data Science and its Applications, IEEE, (2020), 1–5. https://doi.org/10.1109/icodsa50139.2020.9213012
    [28] A. Rawal, J. Liang, R. Miikkulainen, Discovering gated recurrent neural network architectures, in Deep Neural Evolution, Springer, (2020), 233–251. https://doi.org/10.1007/978-981-15-3685-4_9
    [29] Y. Li, I. King, Autograph: Automated graph neural network, in International Conference on Neural Information Processing, Springer, 12533 (2020), 189–201. https://doi.org/10.1007/978-3-030-63833-7_16
    [30] Y. Gong, Y. Sun, D. Peng, P. Chen, Z. Yan, K. Yang, Analyze covid-19 ct images based on evolutionary algorithm with dynamic searching space, Complex Intell. Syst., 7 (2021), 3195–3209. https://doi.org/10.1007/s40747-021-00513-8 doi: 10.1007/s40747-021-00513-8
    [31] T. Elsken, J. H. Metzen, F. Hutter, Neural architecture search: A survey, preprint, arXiv: 1808.05377.
    [32] B. Zoph, Q. V. Le, Neural architecture search with reinforcement learning, preprint, arXiv: 1611.01578.
    [33] B. Zoph, V. Vasudevan, J. Shlens, Q. V. Le, Learning transferable architectures for scalable image recognition, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, (2018), 8697–8710. https://doi.org/10.1109/cvpr.2018.00907
    [34] Z. Zhong, J. Yan, W. Wu, J. Shao, C. L. Liu, Practical block-wise neural network architecture generation, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, (2018), 2423–2432. https://doi.org/10.1109/cvpr.2018.00257
    [35] H. Jin, Q. Song, X. Hu, Auto-keras: Efficient neural architecture search with network morphism, preprint, arXiv: 1806.10282.
    [36] K. Kandasamy, W. Neiswanger, J. Schneider, B. Poczos, E. P. Xing, Neural architecture search with bayesian optimisation and optimal transport, preprint, arXiv: 1802.07191.
    [37] F. Hutter, H. H. Hoos, K. Leyton-Brown, Sequential model-based optimization for general algorithm configuration, in Learning and Intelligent Optimization, Springer, 6683 (2021), 507–523. https://doi.org/10.1007/978-3-642-25566-3_40
    [38] Y. Sun, B. Xue, M. Zhang, G. G. Yen, An experimental study on hyper-parameter optimization for stacked auto-encoders, in 2018 IEEE Congress on Evolutionary Computation, IEEE, (2018), 1–8. https://doi.org/10.1109/cec.2018.8477921
    [39] Y. Du, Y. Fan, X. Liu, Y. Luo, J. Tang, P. Liu, et al., Multiscale cooperative differential evolution algorithm, Comput. Intell. Neurosci., 2019 (2019), 1–18. https://doi.org/10.1155/2019/5259129 doi: 10.1155/2019/5259129
    [40] V. P. Ha, T. K. Dao, N. Y. Pham, M. H. Le, A variable-length chromosome genetic algorithm for time-based sensor network schedule optimization, Sensors, 21 (2021), 3990. https://doi.org/10.3390/s21123990 doi: 10.3390/s21123990
    [41] E. Real, A. Aggarwal, Y. Huang, Q. V. Le, Regularized evolution for image classifier architecture search, in Proceedings of the AAAI Conference on Artificial Intelligence, AAAI Press, 33 (2019), 4780–4789. https://doi.org/10.1609/aaai.v33i01.33014780
    [42] J. Snoek, O. Rippel, K. Swersky, R. Kiros, N. Satish, N. Sundaram, et al., Scalable bayesian optimization using deep neural networks, preprint, arXiv: 1502.05700.
    [43] Y. Liu, Y. Sun, B. Xue, M. Zhang, G. G. Yen, K. C. Tan, A survey on evolutionary neural architecture search, IEEE Trans. Neural Networks Learn. Syst., 34 (2021), 550–570. https://doi.org/10.1109/TNNLS.2021.3100554 doi: 10.1109/TNNLS.2021.3100554
    [44] Y. Wang, H. Yao, S. Zhao, Auto-encoder based dimensionality reduction, Neurocomputing, 184 (2016), 232–242. https://doi.org/10.1016/j.neucom.2015.08.104 doi: 10.1016/j.neucom.2015.08.104
    [45] M. Sakurada, T. Yairi, Anomaly detection using autoencoders with nonlinear dimensionality reduction, in Proceedings of the MLSDA 2014 2nd Workshop on Machine Learning for Sensory Data Analysis, ACM, (2014), 4–11. https://doi.org/10.1145/2689746.2689747
    [46] W. Wang, Y. Huang, Y. Wang, L. Wang, Generalized autoencoder: A neural network framework for dimensionality reduction, in 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, IEEE, (2014), 496–503. https://doi.org/10.1109/cvprw.2014.79
    [47] X. Lu, Y. Tsao, S. Matsuda, C. Hori, Speech enhancement based on deep denoising autoencoder, in Interspeech, ISCA, (2013), 436–440. https://doi.org/10.21437/interspeech.2013-130
    [48] H. T. Chiang, Y. Y. Hsieh, S. W. Fu, K. H. Hung, Y. Tsao, S. Y. Chien, Noise reduction in ecg signals using fully convolutional denoising autoencoders, IEEE Access, 7 (2019), 60806–60813. https://doi.org/10.1109/access.2019.2912036 doi: 10.1109/access.2019.2912036
    [49] L. Yasenko, Y. Klyatchenko, O. Tarasenko-Klyatchenko, Image noise reduction by denoising autoencoder, in 2020 IEEE 11th International Conference on Dependable Systems, Services and Technologies, IEEE, (2020), 351–355. https://doi.org/10.1109/dessert50317.2020.9125027
    [50] Z. Wan, Y. Zhang, H. He, Variational autoencoder based synthetic data generation for imbalanced learning, in 2017 IEEE Symposium Series on Computational Intelligence, IEEE, (2017), 1–7. https://doi.org/10.1109/ssci.2017.8285168
    [51] S. Semeniuta, A. Severyn, E. Barth, A hybrid convolutional variational autoencoder for text generation, preprint, arXiv: 1702.02390.
    [52] W. Xu, S. Keshmiri, G. Wang, Adversarially approximated autoencoder for image generation and manipulation, IEEE Trans. Multimedia, 21 (2019), 2387–2396. https://doi.org/10.1109/tmm.2019.2898777 doi: 10.1109/tmm.2019.2898777
    [53] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, P. A. Manzagol, L. Bottou, Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion, J. Mach. Learn. Res., 11 (2010), 3371–3408.
    [54] M. Tschannen, O. Bachem, M. Lucic, Recent advances in autoencoder-based representation learning, preprint, arXiv: 1812.05069.
    [55] S. Lauly, H. Larochelle, M. M. Khapra, B. Ravindran, V. Raykar, A. Saha, et al., An autoencoder approach to learning bilingual word representations, preprint, arXiv: 1402.1454.
    [56] I. Sutskever, O. Vinyals, Q. V. Le, Sequence to sequence learning with neural networks, in Advances in Neural Information Processing Systems, Curran Associates, Inc. 27 (2014), 1–9.
    [57] Y. A. Chung, C. C. Wu, C. H. Shen, H. Y. Lee, L. S. Lee, Audio word2vec: Unsupervised learning of audio segment representations using sequence-to-sequence autoencoder, preprint, arXiv: 1603.00982.
    [58] H. Suresh, P. Szolovits, M. Ghassemi, The use of autoencoders for discovering patient phenotypes, preprint, arXiv: 1703.07004.
    [59] Z. Chen, Y. Zhou, Z. Huang, Auto-creation of effective neural network architecture by evolutionary algorithm and resnet for image classification, in 2019 IEEE International Conference on Systems, Man and Cybernetics, IEEE, (2019), 3895–3900. https://doi.org/10.1109/smc.2019.8914267
    [60] K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, et al., Learning phrase representations using rnn encoder-decoder for statistical machine translation, preprint, arXiv: 1406.1078.
    [61] P. Koehn, Europarl: A parallel corpus for statistical machine translation, in Proceedings of Machine Translation Summit X: Papers, (2005), 79–86.
    [62] C. Moon, J. Kim, G. Choi, Y. Seo, An efficient genetic algorithm for the traveling salesman problem with precedence constraints, Eur. J. Oper. Res., 140 (2002), 606–617. https://doi.org/10.1016/s0377-2217(01)00227-2 doi: 10.1016/s0377-2217(01)00227-2
    [63] K. Chen, W. Pang, Immunetnas: An immune-network approach for searching convolutional neural network architectures, preprint, arXiv: 2002.12704.
    [64] M. Shi, D. A. Wilson, X. Zhu, Y. Huang, Y. Zhuang, J. Liu, et al., Evolutionary architecture search for graph neural networks, preprint, arXiv: 2009.10199.
    [65] A. Karpathy, Lessons learned from manually classifying cifar-10, Andrej Karpathy blog, 2011. Available from: http://karpathy.github.io/2011/04/27/manually-classifying-cifar10.
    [66] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, et al., Going deeper with convolutions, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, (2015), 1–9. https://doi.org/10.1109/cvpr.2015.7298594
    [67] K. He, X. Zhang, S. Ren, J. Sun, Identity mappings in deep residual networks, in European Conference on Computer Vision, Springer, 9908 (2016), 630–645. https://doi.org/10.1007/978-3-319-46493-0_38
    [68] S. Zagoruyko, N. Komodakis, Wide residual networks, preprint, arXiv: 1605.07146.
    [69] T. Desell, A. ElSaid, A. G. Ororbia, An empirical exploration of deep recurrent connections using neuro-evolution, in International Conference on the Applications of Evolutionary Computation, Springer, 12104 (2020), 546–561. https://doi.org/10.1007/978-3-030-43722-0_35
    [70] A. Krizhevsky, Learning Multiple Layers of Features from Tiny Images, 2009.
    [71] M. P. Marcus, B. Santorini, M. A. Marcinkiewic, Building a large annotated corpus of english: The penn treebank, Tech. Rep. (CIS), 1993 (1993), 237.
    [72] S. Merity, C. Xiong, J. Bradbury, R. Socher, Pointer sentinel mixture models, preprint, arXiv: 1609.07843.
    [73] A. K. McCallum, K. Nigam, J. Rennie, K. Seymore, Automating the construction of internet portals with machine learning, Inf. Retr., 3 (2000), 127–163. https://doi.org/10.1023/A:1009953814988 doi: 10.1023/A:1009953814988
    [74] J. Wei, Y. Tay, Q. V. Le, Inverse scaling can become u-shaped, preprint, arXiv: 2211.02011.
© 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).
