
An adaptive offloading framework for license plate detection in collaborative edge and cloud computing


  • With the explosive growth of edge computing, huge amounts of data are being generated on billions of edge devices. It is difficult to balance detection efficiency and detection accuracy at the same time for object detection on multiple edge devices. However, few studies investigate and improve the collaboration between cloud computing and edge computing under realistic challenges, such as limited computation capacities, network congestion and long latency. To tackle these challenges, we propose a new multi-model license plate detection hybrid methodology that trades off efficiency against accuracy to process license plate detection tasks at the edge nodes and the cloud server. We also design a new probability-based offloading initialization algorithm that not only obtains reasonable initial solutions but also improves the accuracy of license plate detection. In addition, we introduce an adaptive offloading framework based on a gravitational genetic searching algorithm (GGSA), which comprehensively considers influential factors such as license plate detection time, queuing time, energy consumption, image quality and accuracy. GGSA is therefore helpful for Quality-of-Service (QoS) enhancement. Extensive experiments show that our proposed GGSA offloading framework performs well in collaborative edge and cloud computing for license plate detection compared with other methods. The results demonstrate that, compared with the traditional scheme in which all tasks are executed on the cloud server (AC), the offloading effect of GGSA can be improved by 50.31%. Besides, the offloading framework has strong portability when making real-time offloading decisions.

    Citation: Hong Zhang, Penghai Wang, Shouhua Zhang, Zihan Wu. An adaptive offloading framework for license plate detection in collaborative edge and cloud computing[J]. Mathematical Biosciences and Engineering, 2023, 20(2): 2793-2814. doi: 10.3934/mbe.2023131
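    As a purely illustrative aside (this is not the authors' GGSA implementation; every task field, weight and parameter below is a hypothetical placeholder), the decision problem described in the abstract can be sketched as a weighted multi-factor fitness over binary offloading choices, searched by a simple genetic-style loop:

    import random
    from dataclasses import dataclass

    @dataclass
    class Task:
        edge_time: float      # estimated detection + queuing time on the edge node (s)
        cloud_time: float     # estimated upload + detection time on the cloud server (s)
        edge_energy: float    # energy cost of local execution (J)
        tx_energy: float      # energy cost of transmitting the image (J)
        accuracy_gain: float  # accuracy improvement expected from the cloud model

    def fitness(plan, tasks, w_time=0.5, w_energy=0.3, w_acc=0.2):
        # Lower is better: weighted time and energy minus the accuracy gained by offloading.
        time = sum(t.cloud_time if offload else t.edge_time for offload, t in zip(plan, tasks))
        energy = sum(t.tx_energy if offload else t.edge_energy for offload, t in zip(plan, tasks))
        accuracy = sum(t.accuracy_gain for offload, t in zip(plan, tasks) if offload)
        return w_time * time + w_energy * energy - w_acc * accuracy

    def search(tasks, pop_size=30, generations=100, mutation=0.05, seed=0):
        # Evolve binary offloading decisions (True = send to cloud, False = keep on the edge node).
        rng = random.Random(seed)
        population = [[rng.random() < 0.5 for _ in tasks] for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=lambda plan: fitness(plan, tasks))
            parents = population[: pop_size // 2]
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = rng.sample(parents, 2)
                cut = rng.randrange(1, len(tasks))
                child = a[:cut] + b[cut:]                                            # one-point crossover
                child = [(not g) if rng.random() < mutation else g for g in child]   # bit-flip mutation
                children.append(child)
            population = parents + children
        return min(population, key=lambda plan: fitness(plan, tasks))

    if __name__ == "__main__":
        rng = random.Random(1)
        tasks = [Task(rng.uniform(0.2, 1.0), rng.uniform(0.1, 0.6), rng.uniform(0.5, 2.0),
                      rng.uniform(0.2, 1.0), rng.uniform(0.0, 0.1)) for _ in range(20)]
        best = search(tasks)
        print("tasks offloaded to the cloud:", sum(best), "of", len(best))

    A full framework such as the one proposed in the paper would additionally model queuing at each node, image quality and per-device energy budgets inside the fitness function; the sketch above only shows the overall shape of such a search.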




    In this research, we mainly focus on a coupled system of wave equations. We assume a bounded domain \Omega\subset{\bf R}^{N} with sufficiently smooth boundary \partial\Omega , and take positive constants \xi_{0}, \xi_{1}, \delta, \beta_{1}, \beta_{3} , where m > 1 for N = 1, 2 , and 1 < m\leq\frac{N+2}{N-2} for N\geq 3 . The coupled system with these terms is given by

    \begin{equation} \begin{cases} v_{tt}-\big(\xi_{0}+\xi_{1}\Vert\nabla v\Vert_{2}^{2}+\delta(\nabla v, \nabla v_{t})_{L^{2}(\Omega)}\big)\Delta v(t)+\int_{0}^{\infty}g_{1}(s)\Delta v(t-s)ds+\beta_{1}\vert v_{t}(t)\vert^{m-2}v_{t}(t)+\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{2}(r)\vert\,\vert v_{t}(t-r)\vert^{m-2}v_{t}(t-r)dr+f_{1}(v, w) = 0, \\ w_{tt}-\big(\xi_{0}+\xi_{1}\Vert\nabla w\Vert_{2}^{2}+\delta(\nabla w, \nabla w_{t})_{L^{2}(\Omega)}\big)\Delta w(t)+\int_{0}^{\infty}g_{2}(s)\Delta w(t-s)ds+\beta_{3}\vert w_{t}(t)\vert^{m-2}w_{t}(t)+\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{4}(r)\vert\,\vert w_{t}(t-r)\vert^{m-2}w_{t}(t-r)dr+f_{2}(v, w) = 0, \\ v(z, 0) = v_{0}(z), \quad v_{t}(z, 0) = v_{1}(z), \quad w(z, 0) = w_{0}(z), \quad w_{t}(z, 0) = w_{1}(z), \quad \text{in} \ \Omega, \\ v_{t}(z, -t) = j_{0}(z, t), \quad w_{t}(z, -t) = \varrho_{0}(z, t), \quad \text{in} \ \Omega\times(0, \tau_{2}), \\ v(z, t) = w(z, t) = 0, \quad \text{on} \ \partial\Omega\times(0, \infty), \end{cases} \end{equation} (1.1)

    in which G = \Omega\times(\tau_{1}, \tau_{2})\times(0, \infty) , \tau_{1} < \tau_{2} are non-negative constants, \beta_{2}, \beta_{4}:[\tau_{1}, \tau_{2}]\rightarrow{\bf R} represent the distributed time delays, and the kernels g_{i} , i = 1, 2 , are positive.

    The viscoelastic damping term, whose kernel is the function g , is a physical term used to describe the link between the strain and stress histories in a beam, inspired by the Boltzmann theory. Several publications discuss this subject and produce many fresh and original findings [1,2,3,4,5], particularly concerning the hypotheses on the initial condition [6,7,8,9,10,11,12] and on the kernel; see [13,14,15,16,17]. As concerns the plate equation and the spillover problem, Balakrishnan and Taylor introduced a novel damping model in [18], which they dubbed Balakrishnan-Taylor damping. A few studies that specifically addressed this damping can be consulted for further information [18,19,20,21,22,23].

    Several applications and real-world problems are frequently affected by delays, which turns numerous systems into interesting research topics. Many authors have recently studied the stability of evolution systems with time delays, particularly the effect of distributed delay; see [24,25,26].

    In [1], the authors presented a stability result for the system over a considerably broader class of kernels in the absence of delay and Balakrishnan-Taylor damping, that is, for \xi_{0} = 1, \ \xi_{1} = \delta = \beta_{i} = 0, \ i = 1, \ldots, 4 .

    Based on everything said above, one specific problem may be formulated by combining these damping terms (distributed delay terms, Balakrishnan-Taylor damping and infinite memory), especially when the past history and the distributed delay terms

    \int_{\tau_{1}}^{\tau_{2}}\vert\beta_{i}(r)\vert\,\vert u_{t}(t-r)\vert^{m-2}u_{t}(t-r)\,dr, \quad i = 2, 4,

    are added. We shall attempt to throw light on it, since we think it represents a fresh topic that merits investigation and analysis in contrast to the ones mentioned before. Our study is structured as follows: in the second section, we establish the assumptions, notions and lemmas we require; in the final section, we substantiate our major finding.

    In this section, we introduce some basic results needed for the analysis of our problem. We assume the following:

    (G1) g_{i}:{\bf R}_{+}\rightarrow{\bf R}_{+} , i = 1, 2 , are non-increasing C^{1} functions fulfilling

    \begin{equation} g_{i}(0) > 0, \quad \xi_{0}-\int_{0}^{\infty}g_{i}(s)ds = l_{i} > 0, \quad i = 1, 2, \end{equation} (2.1)

    and we set

    g_{0} = \int_{0}^{\infty}g_{1}(s)ds, \quad \widehat{g}_{0} = \int_{0}^{\infty}g_{2}(s)ds.

    (G2) One can find C^{1} functions G_{i}:{\bf R}_{+}\rightarrow{\bf R}_{+} with G_{i}(0) = G_{i}'(0) = 0 . The functions G_{i}(t) are either linear or strictly increasing and strictly convex of class C^{2}({\bf R}_{+}) on (0, \varrho] , \varrho\leq g_{i}(0) , in a manner that

    \begin{equation} g_{i}'(t)\leq-\zeta_{i}(t)G_{i}(g_{i}(t)), \quad \forall t\geq 0, \quad \text{for} \quad i = 1, 2, \end{equation} (2.2)

    in which \zeta_{i}(t) are C^{1} functions fulfilling

    \begin{equation} \zeta_{i}(t) > 0, \quad \zeta_{i}'(t)\leq 0, \quad \forall t\geq 0. \end{equation} (2.3)

    (G3) \beta_{2}, \beta_{4}:[\tau_{1}, \tau_{2}]\rightarrow{\bf R} are bounded functions fulfilling

    \begin{equation} \int_{\tau_{1}}^{\tau_{2}}\vert\beta_{2}(r)\vert dr < \beta_{1}, \quad \int_{\tau_{1}}^{\tau_{2}}\vert\beta_{4}(r)\vert dr < \beta_{3}. \end{equation} (2.4)

    (G4) f_{i}:{\bf R}^{2}\rightarrow{\bf R} are C^{1} functions with f_{i}(0, 0) = 0 , and one can find a function F in a way that

    \begin{equation} f_{1}(c, e) = \frac{\partial F}{\partial c}(c, e), \quad f_{2}(c, e) = \frac{\partial F}{\partial e}(c, e), \quad F\geq 0, \quad cf_{1}(c, e)+ef_{2}(c, e)-F(c, e)\geq 0, \end{equation} (2.5)

    and

    \begin{equation} \bigg\vert\frac{\partial f_{i}}{\partial c}(c, e)\bigg\vert+\bigg\vert\frac{\partial f_{i}}{\partial e}(c, e)\bigg\vert\leq d\big(1+\vert c\vert^{p_{i}-1}+\vert e\vert^{p_{i}-1}\big), \quad \forall (c, e)\in{\bf R}^{2}. \end{equation} (2.6)
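    For orientation only, a prototype frequently used in the literature on such coupled source terms (it is not taken from the present paper, and its constants must of course be checked against (2.5) and (2.6)) is

    F(c, e) = a\vert c+e\vert^{p+1}+2b\vert ce\vert^{\frac{p+1}{2}}, \quad a > 1, \ b > 0, \quad f_{1} = \frac{\partial F}{\partial c}, \quad f_{2} = \frac{\partial F}{\partial e},

    for which Euler's identity for (p+1) -homogeneous functions gives cf_{1}(c, e)+ef_{2}(c, e) = (p+1)F(c, e)\geq F(c, e)\geq 0 .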

    We also take the following notation:

    (g\circ\phi)(t) := \int_{\Omega}\int_{0}^{\infty}g(r)\vert\phi(t)-\phi(t-r)\vert^{2}dr\,dz,

    and

    M_{1}(t) := \xi_{0}+\xi_{1}\Vert\nabla v\Vert_{2}^{2}+\delta(\nabla v(t), \nabla v_{t}(t))_{L^{2}(\Omega)}, \quad M_{2}(t) := \xi_{0}+\xi_{1}\Vert\nabla w\Vert_{2}^{2}+\delta(\nabla w(t), \nabla w_{t}(t))_{L^{2}(\Omega)}.

    Lemma 2.1. (Sobolev-Poincar\'e inequality [27]). Assume that 2\leq q < \infty for n = 1, 2 and 2\leq q\leq\frac{2n}{n-2} for n\geq 3 . Then, one can find c = c(\Omega, q) > 0 in a manner that

    \Vert v\Vert_{q}\leq c\Vert\nabla v\Vert_{2}, \quad \forall v\in H_{0}^{1}(\Omega).

    Moreover, choose the below as in [26]:

    x(z, \rho, r, t) = v_{t}(z, t-r\rho), \quad y(z, \rho, r, t) = w_{t}(z, t-r\rho),

    with

    \begin{equation} \begin{cases} rx_{t}(z, \rho, r, t)+x_{\rho}(z, \rho, r, t) = 0, \quad ry_{t}(z, \rho, r, t)+y_{\rho}(z, \rho, r, t) = 0, \\ x(z, 0, r, t) = v_{t}(z, t), \quad y(z, 0, r, t) = w_{t}(z, t). \end{cases} \end{equation} (2.7)

    Take the auxiliary variables (see [28])

    \eta^{t}(z, s) = v(z, t)-v(z, t-s), \quad \vartheta^{t}(z, s) = w(z, t)-w(z, t-s), \quad s\geq 0.

    Then

    \begin{equation} \eta^{t}_{t}(z, s)+\eta^{t}_{s}(z, s) = v_{t}(z, t), \quad \vartheta^{t}_{t}(z, s)+\vartheta^{t}_{s}(z, s) = w_{t}(z, t). \end{equation} (2.8)
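    For completeness, (2.7) and (2.8) are direct consequences of the chain rule applied to the definitions above:

    rx_{t}(z, \rho, r, t)+x_{\rho}(z, \rho, r, t) = r\,v_{tt}(z, t-r\rho)-r\,v_{tt}(z, t-r\rho) = 0, \quad x(z, 0, r, t) = v_{t}(z, t),

    \eta^{t}_{t}(z, s)+\eta^{t}_{s}(z, s) = \big(v_{t}(z, t)-v_{t}(z, t-s)\big)+v_{t}(z, t-s) = v_{t}(z, t),

    and the same computations apply to y and \vartheta^{t} .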

    Rewrite the problem (1.1) as follows:

    \begin{equation} \begin{cases} v_{tt}-\big(l_{1}+\xi_{1}\Vert\nabla v\Vert_{2}^{2}+\delta(\nabla v, \nabla v_{t})_{L^{2}(\Omega)}\big)\Delta v(t)-\int_{0}^{\infty}g_{1}(s)\Delta\eta^{t}(s)ds+\beta_{1}\vert v_{t}(t)\vert^{m-2}v_{t}(t)+\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{2}(r)\vert\,\vert x(z, 1, r, t)\vert^{m-2}x(z, 1, r, t)dr+f_{1}(v, w) = 0, \\ w_{tt}-\big(l_{2}+\xi_{1}\Vert\nabla w\Vert_{2}^{2}+\delta(\nabla w, \nabla w_{t})_{L^{2}(\Omega)}\big)\Delta w(t)-\int_{0}^{\infty}g_{2}(s)\Delta\vartheta^{t}(s)ds+\beta_{3}\vert w_{t}(t)\vert^{m-2}w_{t}(t)+\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{4}(r)\vert\,\vert y(z, 1, r, t)\vert^{m-2}y(z, 1, r, t)dr+f_{2}(v, w) = 0, \\ rx_{t}(z, \rho, r, t)+x_{\rho}(z, \rho, r, t) = 0, \\ ry_{t}(z, \rho, r, t)+y_{\rho}(z, \rho, r, t) = 0, \\ \eta^{t}_{t}(z, s)+\eta^{t}_{s}(z, s) = v_{t}(z, t), \\ \vartheta^{t}_{t}(z, s)+\vartheta^{t}_{s}(z, s) = w_{t}(z, t), \end{cases} \end{equation} (2.9)

    where

    (z, \rho, r, t)\in\Omega\times(0, 1)\times(\tau_{1}, \tau_{2})\times(0, \infty),

    with

    \begin{equation} \begin{cases} v(z, 0) = v_{0}(z), \quad v_{t}(z, 0) = v_{1}(z), \quad w(z, 0) = w_{0}(z), \quad w_{t}(z, 0) = w_{1}(z), \quad \text{in} \ \Omega, \\ x(z, \rho, r, 0) = j_{0}(z, \rho r), \quad y(z, \rho, r, 0) = \varrho_{0}(z, \rho r), \quad \text{in} \ \Omega\times(0, 1)\times(0, \tau_{2}), \\ v(z, t) = \eta^{t}(z, s) = 0, \quad z\in\partial\Omega, \ t, s\in(0, \infty), \quad \eta^{t}(z, 0) = 0, \ t\geq 0, \quad \eta^{0}(z, s) = \eta_{0}(s) = 0, \ s\geq 0, \\ w(z, t) = \vartheta^{t}(z, s) = 0, \quad z\in\partial\Omega, \ t, s\in(0, \infty), \quad \vartheta^{t}(z, 0) = 0, \ t\geq 0, \quad \vartheta^{0}(z, s) = \vartheta_{0}(s) = 0, \ s\geq 0. \end{cases} \end{equation} (2.10)

    In the upcoming Lemma, the energy functional will be introduced.

    Lemma 2.2. Let the energy functional be denoted by E ; then it is given by

    \begin{eqnarray} E(t)& = &\frac{1}{2}\big(\Vert v_{t}\Vert_{2}^{2}+\Vert w_{t}\Vert_{2}^{2}\big)+\frac{\xi_{1}}{4}\big(\Vert\nabla v(t)\Vert_{2}^{4}+\Vert\nabla w(t)\Vert_{2}^{4}\big)+\int_{\Omega}F(v, w)dz+\frac{1}{2}\big(l_{1}\Vert\nabla v(t)\Vert_{2}^{2}+l_{2}\Vert\nabla w(t)\Vert_{2}^{2}\big)+\frac{1}{2}\big((g_{1}\circ\nabla v)(t)+(g_{2}\circ\nabla w)(t)\big)\\ &&+\frac{m-1}{m}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}}r\big(\vert\beta_{2}(r)\vert\,\Vert x(z, \rho, r, t)\Vert_{m}^{m}+\vert\beta_{4}(r)\vert\,\Vert y(z, \rho, r, t)\Vert_{m}^{m}\big)dr\,d\rho. \end{eqnarray} (2.11)

    The above fulfills the below

    \begin{eqnarray} E'(t)&\leq&-\gamma_{0}\big(\Vert v_{t}(t)\Vert_{m}^{m}+\Vert w_{t}(t)\Vert_{m}^{m}\big)+\frac{1}{2}\big((g_{1}'\circ\nabla v)(t)+(g_{2}'\circ\nabla w)(t)\big)-\frac{\delta}{4}\bigg\{\bigg(\frac{d}{dt}\Vert\nabla v(t)\Vert_{2}^{2}\bigg)^{2}+\bigg(\frac{d}{dt}\Vert\nabla w(t)\Vert_{2}^{2}\bigg)^{2}\bigg\}\leq 0, \end{eqnarray} (2.12)

    in which \gamma_{0} = \min\big\{\beta_{1}-\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{2}(r)\vert dr, \ \beta_{3}-\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{4}(r)\vert dr\big\} .

    Proof. To prove the result, we take the L^{2}(\Omega) inner product of the first two equations of (2.9) with v_{t} and w_{t} , respectively, and sum; the following is obtained:

    \begin{eqnarray} &&(v_{tt}(t), v_{t}(t))_{L^{2}(\Omega)}-(M_{3}(t)\Delta v(t), v_{t}(t))_{L^{2}(\Omega)}-\bigg(\int_{0}^{\infty}g_{1}(s)\Delta\eta^{t}(s)ds, v_{t}(t)\bigg)_{L^{2}(\Omega)}+\beta_{1}(\vert v_{t}\vert^{m-2}v_{t}, v_{t})_{L^{2}(\Omega)}+\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{2}(r)\vert\big(\vert x(z, 1, r, t)\vert^{m-2}x(z, 1, r, t), v_{t}(t)\big)_{L^{2}(\Omega)}dr\\ &&+(w_{tt}(t), w_{t}(t))_{L^{2}(\Omega)}-(M_{4}(t)\Delta w(t), w_{t}(t))_{L^{2}(\Omega)}-\bigg(\int_{0}^{\infty}g_{2}(s)\Delta\vartheta^{t}(s)ds, w_{t}(t)\bigg)_{L^{2}(\Omega)}+\beta_{3}(\vert w_{t}\vert^{m-2}w_{t}, w_{t})_{L^{2}(\Omega)}+\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{4}(r)\vert\big(\vert y(z, 1, r, t)\vert^{m-2}y(z, 1, r, t), w_{t}(t)\big)_{L^{2}(\Omega)}dr\\ &&+(f_{1}(v, w), v_{t}(t))_{L^{2}(\Omega)}+(f_{2}(v, w), w_{t}(t))_{L^{2}(\Omega)} = 0, \end{eqnarray} (2.13)

    in which

    M_{3}(t) := l_{1}+\xi_{1}\Vert\nabla v\Vert_{2}^{2}+\delta(\nabla v(t), \nabla v_{t}(t))_{L^{2}(\Omega)}, \quad M_{4}(t) := l_{2}+\xi_{1}\Vert\nabla w\Vert_{2}^{2}+\delta(\nabla w(t), \nabla w_{t}(t))_{L^{2}(\Omega)}.

    By a direct computation, the following is obtained

    \begin{equation} (v_{tt}(t), v_{t}(t))_{L^{2}(\Omega)} = \frac{1}{2}\frac{d}{dt}\big(\Vert v_{t}(t)\Vert_{2}^{2}\big), \end{equation} (2.14)

    and further simplification leads us to the following

    \begin{eqnarray} -(M_{3}(t)\Delta v(t), v_{t}(t))_{L^{2}(\Omega)}& = &-\big(\big(l_{1}+\xi_{1}\Vert\nabla v\Vert_{2}^{2}+\delta(\nabla v(t), \nabla v_{t}(t))_{L^{2}(\Omega)}\big)\Delta v(t), v_{t}(t)\big)_{L^{2}(\Omega)}\\ & = &\big(l_{1}+\xi_{1}\Vert\nabla v\Vert_{2}^{2}+\delta(\nabla v(t), \nabla v_{t}(t))_{L^{2}(\Omega)}\big)\int_{\Omega}\nabla v(t)\cdot\nabla v_{t}(t)dz\\ & = &\frac{1}{2}\big(l_{1}+\xi_{1}\Vert\nabla v\Vert_{2}^{2}+\delta(\nabla v(t), \nabla v_{t}(t))_{L^{2}(\Omega)}\big)\frac{d}{dt}\bigg\{\int_{\Omega}\vert\nabla v(t)\vert^{2}dz\bigg\}\\ & = &\frac{d}{dt}\bigg\{\frac{1}{2}\bigg(l_{1}+\frac{\xi_{1}}{2}\Vert\nabla v\Vert_{2}^{2}\bigg)\Vert\nabla v(t)\Vert_{2}^{2}\bigg\}+\frac{\delta}{4}\bigg\{\frac{d}{dt}\Vert\nabla v(t)\Vert_{2}^{2}\bigg\}^{2}. \end{eqnarray} (2.15)

    A further calculation gives

    \begin{eqnarray} -\bigg(\int_{0}^{\infty}g_{1}(s)\Delta\eta^{t}(s)ds, v_{t}(t)\bigg)_{L^{2}(\Omega)}& = &\int_{\Omega}\nabla v_{t}\cdot\int_{0}^{\infty}g_{1}(s)\nabla\eta^{t}(s)ds\,dz = \int_{0}^{\infty}g_{1}(s)\int_{\Omega}\nabla v_{t}\cdot\nabla\eta^{t}(s)dz\,ds\\ & = &\int_{0}^{\infty}g_{1}(s)\int_{\Omega}\big(\nabla\eta^{t}_{t}+\nabla\eta^{t}_{s}\big)\cdot\nabla\eta^{t}(s)dz\,ds = \int_{0}^{\infty}g_{1}(s)\int_{\Omega}\nabla\eta^{t}_{t}\cdot\nabla\eta^{t}(s)dz\,ds+\int_{\Omega}\int_{0}^{\infty}g_{1}(s)\nabla\eta^{t}_{s}\cdot\nabla\eta^{t}(s)ds\,dz\\ & = &\frac{1}{2}\frac{d}{dt}(g_{1}\circ\nabla v)(t)-\frac{1}{2}(g_{1}'\circ\nabla v)(t). \end{eqnarray} (2.16)

    In the same way, we have

    \begin{eqnarray} (w_{tt}(t), w_{t}(t))_{L^{2}(\Omega)}& = &\frac{1}{2}\frac{d}{dt}\big(\Vert w_{t}(t)\Vert_{2}^{2}\big), \\ -(M_{4}(t)\Delta w(t), w_{t}(t))_{L^{2}(\Omega)}& = &\frac{d}{dt}\bigg\{\frac{1}{2}\bigg(l_{2}+\frac{\xi_{1}}{2}\Vert\nabla w\Vert_{2}^{2}\bigg)\Vert\nabla w(t)\Vert_{2}^{2}\bigg\}+\frac{\delta}{4}\bigg\{\frac{d}{dt}\Vert\nabla w(t)\Vert_{2}^{2}\bigg\}^{2}, \\ -\bigg(\int_{0}^{\infty}g_{2}(s)\Delta\vartheta^{t}(s)ds, w_{t}(t)\bigg)_{L^{2}(\Omega)}& = &\frac{1}{2}\frac{d}{dt}(g_{2}\circ\nabla w)(t)-\frac{1}{2}(g_{2}'\circ\nabla w)(t). \end{eqnarray} (2.17)

    Now, differentiating the last term of (2.11) in time, using the third and fourth equations of (2.9) (that is, rx_{t} = -x_{\rho} and ry_{t} = -y_{\rho} ) and integrating over \Omega\times(0, 1)\times(\tau_{1}, \tau_{2}) , the below is obtained

    \begin{eqnarray} &&\frac{d}{dt}\frac{m-1}{m}\int_{\Omega}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}}r\vert\beta_{2}(r)\vert\,\vert x(z, \rho, r, t)\vert^{m}dr\,d\rho\,dz = -(m-1)\int_{\Omega}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{2}(r)\vert\,\vert x\vert^{m-2}x\,x_{\rho}\,dr\,d\rho\,dz\\ & = &-\frac{m-1}{m}\int_{\Omega}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{2}(r)\vert\frac{d}{d\rho}\vert x(z, \rho, r, t)\vert^{m}dr\,d\rho\,dz = \frac{m-1}{m}\int_{\Omega}\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{2}(r)\vert\big(\vert x(z, 0, r, t)\vert^{m}-\vert x(z, 1, r, t)\vert^{m}\big)dr\,dz\\ & = &\frac{m-1}{m}\bigg(\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{2}(r)\vert dr\bigg)\Vert v_{t}(t)\Vert_{m}^{m}-\frac{m-1}{m}\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{2}(r)\vert\,\Vert x(z, 1, r, t)\Vert_{m}^{m}dr. \end{eqnarray} (2.18)

    Similarly, we have

    \begin{eqnarray} \frac{d}{dt}\frac{m-1}{m}\int_{\Omega}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}}r\vert\beta_{4}(r)\vert\,\vert y(z, \rho, r, t)\vert^{m}dr\,d\rho\,dz& = &\frac{m-1}{m}\bigg(\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{4}(r)\vert dr\bigg)\Vert w_{t}(t)\Vert_{m}^{m}-\frac{m-1}{m}\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{4}(r)\vert\,\Vert y(z, 1, r, t)\Vert_{m}^{m}dr. \end{eqnarray} (2.19)

    Here, we utilize Young's inequality to obtain

    \begin{eqnarray} \int_{\tau_{1}}^{\tau_{2}}\vert\beta_{2}(r)\vert\big(\vert x(z, 1, r, t)\vert^{m-2}x(z, 1, r, t), v_{t}(t)\big)_{L^{2}(\Omega)}dr&\leq&\frac{1}{m}\bigg(\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{2}(r)\vert dr\bigg)\Vert v_{t}(t)\Vert_{m}^{m}+\frac{m-1}{m}\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{2}(r)\vert\,\Vert x(z, 1, r, t)\Vert_{m}^{m}dr, \end{eqnarray} (2.20)

    and

    \begin{eqnarray} \int_{\tau_{1}}^{\tau_{2}}\vert\beta_{4}(r)\vert\big(\vert y(z, 1, r, t)\vert^{m-2}y(z, 1, r, t), w_{t}(t)\big)_{L^{2}(\Omega)}dr&\leq&\frac{1}{m}\bigg(\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{4}(r)\vert dr\bigg)\Vert w_{t}(t)\Vert_{m}^{m}+\frac{m-1}{m}\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{4}(r)\vert\,\Vert y(z, 1, r, t)\Vert_{m}^{m}dr. \end{eqnarray} (2.21)

    Finally, we have

    \begin{equation} (f_{1}(v, w), v_{t}(t))_{L^{2}(\Omega)}+(f_{2}(v, w), w_{t}(t))_{L^{2}(\Omega)} = \frac{d}{dt}\int_{\Omega}F(v, w)dz. \end{equation} (2.22)

    Thus, substituting (2.14)–(2.22) into (2.13), we obtain (2.11) and (2.12). As a result, E is a non-increasing function by (2.2)–(2.5), which is the required conclusion.
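    In particular, the constant \gamma_{0} in (2.12) arises from the following cancellation: the boundary terms \Vert x(z, 1, r, t)\Vert_{m}^{m} produced by (2.18) and (2.20) cancel each other, while the coefficient of -\Vert v_{t}\Vert_{m}^{m} becomes

    \beta_{1}-\frac{1}{m}\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{2}(r)\vert dr-\frac{m-1}{m}\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{2}(r)\vert dr = \beta_{1}-\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{2}(r)\vert dr\geq\gamma_{0},

    and similarly for w_{t} with \beta_{3} and \beta_{4} , which is precisely where (2.4) is used.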

    Theorem 2.3. Take U = (v, v_{t}, w, w_{t}, x, y, \eta^{t}, \vartheta^{t})^{T} and assume that (2.1)–(2.5) hold true. Then, for any U_{0}\in\mathcal{H} , one can find a unique solution U of problems (2.9) and (2.10) in a manner that

    U\in C({\bf R}_{+}, \mathcal{H}).

    If U_{0}\in\mathcal{H}_{1} , then U fulfills the following

    U\in C^{1}({\bf R}_{+}, \mathcal{H})\cap C({\bf R}_{+}, \mathcal{H}_{1}),

    in which

    \mathcal{H} = \big(H_{0}^{1}(\Omega)\times L^{2}(\Omega)\big)^{2}\times\big(L^{2}(\Omega\times(0, 1)\times(\tau_{1}, \tau_{2}))\big)^{2}\times\big(L_{g_{1}}\times L_{g_{2}}\big), \quad \mathcal{H}_{1} = \big\{U\in\mathcal{H}\ /\ v, w\in H^{2}(\Omega)\cap H_{0}^{1}(\Omega), \ v_{t}, w_{t}\in H_{0}^{1}(\Omega), \ x, y, x_{\rho}, y_{\rho}\in L^{2}(\Omega\times(0, 1)\times(\tau_{1}, \tau_{2})), \ (\eta^{t}, \vartheta^{t})\in L_{g_{1}}\times L_{g_{2}}, \ \eta^{t}(z, 0) = \vartheta^{t}(z, 0) = 0, \ x(z, 0, r, t) = v_{t}, \ y(z, 0, r, t) = w_{t}\big\}.

    Here, the stability of the system (2.9) and (2.10) will be established and investigated, for which the following lemma is needed.

    Lemma 3.1. Let us suppose that (2.1) and (2.2) are fulfilled. Then

    \begin{equation} \int_{\Omega}\bigg(\int_{0}^{\infty}g_{i}(s)(\nabla v(t)-\nabla v(t-s))ds\bigg)^{2}dz\leq C_{\kappa, i}(h_{i}\circ\nabla v)(t), \quad i = 1, 2, \end{equation} (3.1)

    where

    C_{\kappa, i} := \int_{0}^{\infty}\frac{g_{i}^{2}(s)}{\kappa g_{i}(s)-g_{i}'(s)}ds, \quad h_{i}(t) := \kappa g_{i}(t)-g_{i}'(t), \quad i = 1, 2.

    Proof. We write

    \begin{eqnarray} \int_{\Omega}\bigg(\int_{0}^{\infty}g_{i}(s)(\nabla v(t)-\nabla v(t-s))ds\bigg)^{2}dz& = &\int_{\Omega}\bigg(\int_{-\infty}^{t}g_{i}(t-s)(\nabla v(t)-\nabla v(s))ds\bigg)^{2}dz\\ & = &\int_{\Omega}\bigg(\int_{-\infty}^{t}\frac{g_{i}(t-s)}{\sqrt{\kappa g_{i}(t-s)-g_{i}'(t-s)}}\sqrt{\kappa g_{i}(t-s)-g_{i}'(t-s)}\,(\nabla v(t)-\nabla v(s))ds\bigg)^{2}dz, \end{eqnarray} (3.2)

    from which (3.1) is obtained through the Cauchy-Schwarz inequality.
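    Written out, that last step reads (with h_{i}(s) = \kappa g_{i}(s)-g_{i}'(s) as above)

    \int_{\Omega}\bigg(\int_{0}^{\infty}\frac{g_{i}(s)}{\sqrt{h_{i}(s)}}\sqrt{h_{i}(s)}\,(\nabla v(t)-\nabla v(t-s))ds\bigg)^{2}dz\leq\bigg(\int_{0}^{\infty}\frac{g_{i}^{2}(s)}{h_{i}(s)}ds\bigg)\int_{\Omega}\int_{0}^{\infty}h_{i}(s)\vert\nabla v(t)-\nabla v(t-s)\vert^{2}ds\,dz = C_{\kappa, i}(h_{i}\circ\nabla v)(t).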

    Lemma 3.2. (Jensen's inequality). Let f:\Omega\rightarrow[c, e] and h:\Omega\rightarrow{\bf R} be integrable functions such that, for any z\in\Omega , h(z) > 0 and \int_{\Omega}h(z)dz = k > 0 . Furthermore, assume a convex function G:[c, e]\rightarrow{\bf R} . Then

    \begin{equation} G\bigg(\frac{1}{k}\int_{\Omega}f(z)h(z)dz\bigg)\leq\frac{1}{k}\int_{\Omega}G(f(z))h(z)dz. \end{equation} (3.3)

    Lemma 3.3. It is shown in [12] that one can find positive constants \beta, \widehat{\beta} in a manner that

    \begin{equation} I_{1}(t) = \int_{\Omega}\int_{t}^{\infty}g_{1}(s)\vert\nabla\eta^{t}(s)\vert^{2}ds\,dz\leq\beta\mu(t), \quad I_{2}(t) = \int_{\Omega}\int_{t}^{\infty}g_{2}(s)\vert\nabla\vartheta^{t}(s)\vert^{2}ds\,dz\leq\widehat{\beta}\widehat{\mu}(t), \end{equation} (3.4)

    in which

    \mu(t) = \int_{0}^{\infty}g_{1}(t+s)\bigg(1+\int_{\Omega}\vert\nabla v_{0}(z, s)\vert^{2}dz\bigg)ds, \quad \widehat{\mu}(t) = \int_{0}^{\infty}g_{2}(t+s)\bigg(1+\int_{\Omega}\vert\nabla w_{0}(z, s)\vert^{2}dz\bigg)ds.

    Proof. As the function E(t) is decreasing, utilizing (2.11) we have the following

    \begin{eqnarray} \int_{\Omega}\vert\nabla\eta^{t}(s)\vert^{2}dz& = &\int_{\Omega}\vert\nabla v(z, t)-\nabla v(z, t-s)\vert^{2}dz\leq 2\int_{\Omega}\vert\nabla v(z, t)\vert^{2}dz+2\int_{\Omega}\vert\nabla v(z, t-s)\vert^{2}dz\\ &\leq&2\sup\limits_{s > 0}\int_{\Omega}\vert\nabla v(z, s)\vert^{2}dz+2\int_{\Omega}\vert\nabla v(z, t-s)\vert^{2}dz\leq\frac{4E(0)}{l_{1}}+2\int_{\Omega}\vert\nabla v(z, t-s)\vert^{2}dz, \end{eqnarray} (3.5)

    for any t, s\geq 0 . Further, we have

    \begin{eqnarray} I_{1}(t)&\leq&\frac{4E(0)}{l_{1}}\int_{t}^{\infty}g_{1}(s)ds+2\int_{t}^{\infty}g_{1}(s)\int_{\Omega}\vert\nabla v(z, t-s)\vert^{2}dz\,ds\\ &\leq&\frac{4E(0)}{l_{1}}\int_{0}^{\infty}g_{1}(t+s)ds+2\int_{0}^{\infty}g_{1}(t+s)\int_{\Omega}\vert\nabla v_{0}(z, s)\vert^{2}dz\,ds\leq\beta\mu(t), \end{eqnarray} (3.6)

    in which \beta = \max\{\frac{4E(0)}{l_{1}}, 2\} and \mu(t) = \int_{0}^{\infty}g_{1}(t+s)\big(1+\int_{\Omega}\vert\nabla v_{0}(z, s)\vert^{2}dz\big)ds .

    In the same way, we can deduce that

    \begin{eqnarray} I_{2}(t)&\leq&\frac{4E(0)}{l_{2}}\int_{0}^{\infty}g_{2}(t+s)ds+2\int_{0}^{\infty}g_{2}(t+s)\int_{\Omega}\vert\nabla w_{0}(z, s)\vert^{2}dz\,ds\leq\widehat{\beta}\widehat{\mu}(t), \end{eqnarray} (3.7)

    in which \widehat{\beta} = \max\{\frac{4E(0)}{l_{2}}, 2\} and \widehat{\mu}(t) = \int_{0}^{\infty}g_{2}(t+s)\big(1+\int_{\Omega}\vert\nabla w_{0}(z, s)\vert^{2}dz\big)ds . In the upcoming part, we set the following

    \begin{equation} \Psi(t) := \int_{\Omega}\big(v(t)v_{t}(t)+w(t)w_{t}(t)\big)dz+\frac{\delta}{4}\big(\Vert\nabla v(t)\Vert_{2}^{4}+\Vert\nabla w(t)\Vert_{2}^{4}\big), \end{equation} (3.8)

    and

    \begin{equation} \Phi(t) := -\int_{\Omega}v_{t}\int_{0}^{\infty}g_{1}(s)(v(t)-v(t-s))ds\,dz-\int_{\Omega}w_{t}\int_{0}^{\infty}g_{2}(s)(w(t)-w(t-s))ds\,dz, \end{equation} (3.9)

    and

    \begin{equation} \Theta(t) := \int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}}re^{-\rho r}\big(\vert\beta_{2}(r)\vert\,\Vert x(z, \rho, r, t)\Vert_{m}^{m}+\vert\beta_{4}(r)\vert\,\Vert y(z, \rho, r, t)\Vert_{m}^{m}\big)dr\,d\rho. \end{equation} (3.10)

    Lemma 3.4. The functional \Psi(t) defined in (3.8) fulfills the following

    \begin{eqnarray} \Psi'(t)&\leq&\Vert v_{t}\Vert_{2}^{2}+\Vert w_{t}\Vert_{2}^{2}-\big(l-\varepsilon(c_{1}+c_{2})-\sigma_{1}\big)\big(\Vert\nabla v\Vert_{2}^{2}+\Vert\nabla w\Vert_{2}^{2}\big)-\xi_{1}\big(\Vert\nabla v\Vert_{2}^{4}+\Vert\nabla w\Vert_{2}^{4}\big)+c(\varepsilon)\big(\Vert v_{t}\Vert_{m}^{m}+\Vert w_{t}\Vert_{m}^{m}\big)+c(\sigma_{1})\big(C_{\kappa, 1}(h_{1}\circ\nabla v)(t)+C_{\kappa, 2}(h_{2}\circ\nabla w)(t)\big)-\int_{\Omega}F(v, w)dz\\ &&+c(\varepsilon)\int_{\tau_{1}}^{\tau_{2}}\big(\vert\beta_{2}(r)\vert\,\Vert x(z, 1, r, t)\Vert_{m}^{m}+\vert\beta_{4}(r)\vert\,\Vert y(z, 1, r, t)\Vert_{m}^{m}\big)dr, \end{eqnarray} (3.11)

    for any \varepsilon, \sigma_{1} > 0 , with l = \min\{l_{1}, l_{2}\} .

    Proof. To prove the result, we differentiate (3.8) and then apply (2.9); we have the following

    \begin{eqnarray} \Psi'(t)& = &\Vert v_{t}\Vert_{2}^{2}+\int_{\Omega}v_{tt}v\,dz+\delta\Vert\nabla v\Vert_{2}^{2}\int_{\Omega}\nabla v_{t}\cdot\nabla v\,dz+\Vert w_{t}\Vert_{2}^{2}+\int_{\Omega}w_{tt}w\,dz+\delta\Vert\nabla w\Vert_{2}^{2}\int_{\Omega}\nabla w_{t}\cdot\nabla w\,dz\\ & = &\Vert v_{t}\Vert_{2}^{2}+\Vert w_{t}\Vert_{2}^{2}-\xi_{0}\big(\Vert\nabla v\Vert_{2}^{2}+\Vert\nabla w\Vert_{2}^{2}\big)-\xi_{1}\big(\Vert\nabla v\Vert_{2}^{4}+\Vert\nabla w\Vert_{2}^{4}\big)\underbrace{-\beta_{1}\int_{\Omega}\vert v_{t}\vert^{m-2}v_{t}v\,dz}_{I_{11}}\underbrace{-\beta_{3}\int_{\Omega}\vert w_{t}\vert^{m-2}w_{t}w\,dz}_{I_{12}}+\underbrace{\int_{\Omega}\nabla v(t)\cdot\int_{0}^{\infty}g_{1}(s)\nabla v(t-s)ds\,dz}_{I_{21}}+\underbrace{\int_{\Omega}\nabla w(t)\cdot\int_{0}^{\infty}g_{2}(s)\nabla w(t-s)ds\,dz}_{I_{22}}\\ &&\underbrace{-\int_{\Omega}\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{2}(r)\vert\,\vert x(z, 1, r, t)\vert^{m-2}x(z, 1, r, t)v\,dr\,dz}_{I_{31}}\underbrace{-\int_{\Omega}\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{4}(r)\vert\,\vert y(z, 1, r, t)\vert^{m-2}y(z, 1, r, t)w\,dr\,dz}_{I_{32}}\underbrace{-\int_{\Omega}\big(vf_{1}(v, w)+wf_{2}(v, w)\big)dz}_{I_{4}}. \end{eqnarray} (3.12)

    We estimate the last six terms on the right-hand side of (3.12). Applying Young's, Sobolev-Poincar\'e and H\"older's inequalities together with (2.1) and (2.11), we have

    \begin{eqnarray} I_{11}&\leq&\varepsilon\beta_{1}^{m}\Vert v\Vert_{m}^{m}+c(\varepsilon)\Vert v_{t}\Vert_{m}^{m}\leq\varepsilon\beta_{1}^{m}c_{p}^{m}\Vert\nabla v\Vert_{2}^{m}+c(\varepsilon)\Vert v_{t}\Vert_{m}^{m}\\ &\leq&\varepsilon\beta_{1}^{m}c_{p}^{m}\bigg(\frac{E(0)}{l_{1}}\bigg)^{(m-2)/2}\Vert\nabla v\Vert_{2}^{2}+c(\varepsilon)\Vert v_{t}\Vert_{m}^{m}\leq\varepsilon c_{1}^{1}\Vert\nabla v\Vert_{2}^{2}+c(\varepsilon)\Vert v_{t}\Vert_{m}^{m}. \end{eqnarray} (3.13)

    In addition to this, for any \sigma_{1} > 0 , by Lemma 3.1 we have the below

    \begin{eqnarray} I_{21}& = &\bigg(\int_{0}^{\infty}g_{1}(s)ds\bigg)\Vert\nabla v\Vert_{2}^{2}-\int_{\Omega}\nabla v(t)\cdot\int_{0}^{\infty}g_{1}(s)(\nabla v(t)-\nabla v(t-s))ds\,dz\\ &\leq&(\xi_{0}-l_{1}+\sigma_{1})\Vert\nabla v\Vert_{2}^{2}+c_{\sigma_{1}}C_{\kappa, 1}(h_{1}\circ\nabla v)(t). \end{eqnarray} (3.14)

    Following the same steps as for I_{11} , the below is obtained

    \begin{equation} I_{31}\leq\varepsilon c_{2}^{1}\Vert\nabla v\Vert_{2}^{2}+c(\varepsilon)\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{2}(r)\vert\,\Vert x(z, 1, r, t)\Vert_{m}^{m}dr. \end{equation} (3.15)

    By the same steps as for I_{11}, I_{21} and I_{31} , we have

    \begin{eqnarray} I_{12}&\leq&\varepsilon c_{1}^{2}\Vert\nabla w\Vert_{2}^{2}+c(\varepsilon)\Vert w_{t}\Vert_{m}^{m}, \\ I_{22}&\leq&(\xi_{0}-l_{2}+\sigma_{1})\Vert\nabla w\Vert_{2}^{2}+c_{\sigma_{1}}C_{\kappa, 2}(h_{2}\circ\nabla w)(t), \\ I_{32}&\leq&\varepsilon c_{2}^{2}\Vert\nabla w\Vert_{2}^{2}+c(\varepsilon)\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{4}(r)\vert\,\Vert y(z, 1, r, t)\Vert_{m}^{m}dr. \end{eqnarray} (3.16)

    Combining (3.13)–(3.16) with (3.12) and (2.5), the required estimate (3.11) is obtained.

    Lemma 3.5. For any \sigma, \sigma_{2}, \sigma_{3} > 0 , the functional \Phi(t) introduced in (3.9) satisfies

    \begin{eqnarray} \Phi'(t)&\leq&-(l_{0}-\sigma_{3})\big(\Vert v_{t}\Vert_{2}^{2}+\Vert w_{t}\Vert_{2}^{2}\big)+\xi_{1}\sigma\big(\Vert\nabla v\Vert_{2}^{4}+\Vert\nabla w\Vert_{2}^{4}\big)+\sigma\big(\xi_{0}+\widehat{l_{0}}^{2}+c\widehat{l}\big)\big(\Vert\nabla v\Vert_{2}^{2}+\Vert\nabla w\Vert_{2}^{2}\big)+\sigma_{2}2\delta E(0)\bigg(\frac{1}{l_{1}}\bigg(\frac{1}{2}\frac{d}{dt}\Vert\nabla v\Vert_{2}^{2}\bigg)^{2}+\frac{1}{l_{2}}\bigg(\frac{1}{2}\frac{d}{dt}\Vert\nabla w\Vert_{2}^{2}\bigg)^{2}\bigg)\\ &&+c(\sigma, \sigma_{2}, \sigma_{3})\big(C_{\kappa, 1}(h_{1}\circ\nabla v)(t)+C_{\kappa, 2}(h_{2}\circ\nabla w)(t)\big)+c(\sigma)\bigg(\Vert v_{t}\Vert_{m}^{m}+\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{2}(r)\vert\,\Vert x(z, 1, r, t)\Vert_{m}^{m}dr\bigg)+c(\sigma)\bigg(\Vert w_{t}\Vert_{m}^{m}+\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{4}(r)\vert\,\Vert y(z, 1, r, t)\Vert_{m}^{m}dr\bigg), \end{eqnarray} (3.17)

    where \widehat{l} = \max\{l_{1}, l_{2}\} , l_{0} = \min\{g_{0}, \widehat{g}_{0}\} and \widehat{l_{0}} = \max\{g_{0}, \widehat{g}_{0}\} .

    Proof. Differentiating (3.9), using (2.9) and integrating by parts, we are led to the following

    \begin{eqnarray} \Phi'(t)& = &-\int_{\Omega}v_{tt}\int_{0}^{\infty}g_{1}(s)(v(t)-v(t-s))ds\,dz-\int_{\Omega}v_{t}\,\partial_{t}\bigg(\int_{0}^{\infty}g_{1}(s)(v(t)-v(t-s))ds\bigg)dz-\int_{\Omega}w_{tt}\int_{0}^{\infty}g_{2}(s)(w(t)-w(t-s))ds\,dz-\int_{\Omega}w_{t}\,\partial_{t}\bigg(\int_{0}^{\infty}g_{2}(s)(w(t)-w(t-s))ds\bigg)dz\\ & = &\underbrace{\big(\xi_{0}+\xi_{1}\Vert\nabla v\Vert_{2}^{2}\big)\int_{\Omega}\nabla v\cdot\int_{0}^{\infty}g_{1}(s)(\nabla v(t)-\nabla v(t-s))ds\,dz}_{J_{11}}+\underbrace{\big(\xi_{0}+\xi_{1}\Vert\nabla w\Vert_{2}^{2}\big)\int_{\Omega}\nabla w\cdot\int_{0}^{\infty}g_{2}(s)(\nabla w(t)-\nabla w(t-s))ds\,dz}_{J_{12}}\\ &&+\underbrace{\delta\int_{\Omega}\nabla v\cdot\nabla v_{t}\,dz\cdot\int_{\Omega}\nabla v\cdot\int_{0}^{\infty}g_{1}(s)(\nabla v(t)-\nabla v(t-s))ds\,dz}_{J_{21}}+\underbrace{\delta\int_{\Omega}\nabla w\cdot\nabla w_{t}\,dz\cdot\int_{\Omega}\nabla w\cdot\int_{0}^{\infty}g_{2}(s)(\nabla w(t)-\nabla w(t-s))ds\,dz}_{J_{22}}\\ &&\underbrace{-\int_{\Omega}\bigg(\int_{0}^{\infty}g_{1}(s)\nabla v(t-s)ds\bigg)\cdot\bigg(\int_{0}^{\infty}g_{1}(s)(\nabla v(t)-\nabla v(t-s))ds\bigg)dz}_{J_{31}}\underbrace{-\int_{\Omega}\bigg(\int_{0}^{\infty}g_{2}(s)\nabla w(t-s)ds\bigg)\cdot\bigg(\int_{0}^{\infty}g_{2}(s)(\nabla w(t)-\nabla w(t-s))ds\bigg)dz}_{J_{32}}\\ &&+\underbrace{\beta_{1}\int_{\Omega}\vert v_{t}\vert^{m-2}v_{t}\bigg(\int_{0}^{\infty}g_{1}(s)(v(t)-v(t-s))ds\bigg)dz}_{J_{41}}+\underbrace{\beta_{3}\int_{\Omega}\vert w_{t}\vert^{m-2}w_{t}\bigg(\int_{0}^{\infty}g_{2}(s)(w(t)-w(t-s))ds\bigg)dz}_{J_{42}}\\ &&+\underbrace{\int_{\Omega}\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{2}(r)\vert\,\vert x(z, 1, r, t)\vert^{m-2}x(z, 1, r, t)\bigg(\int_{0}^{\infty}g_{1}(s)(v(t)-v(t-s))ds\bigg)dr\,dz}_{J_{51}}+\underbrace{\int_{\Omega}\int_{\tau_{1}}^{\tau_{2}}\vert\beta_{4}(r)\vert\,\vert y(z, 1, r, t)\vert^{m-2}y(z, 1, r, t)\bigg(\int_{0}^{\infty}g_{2}(s)(w(t)-w(t-s))ds\bigg)dr\,dz}_{J_{52}}\\ &&\underbrace{-\int_{\Omega}v_{t}\,\partial_{t}\bigg(\int_{0}^{\infty}g_{1}(s)(v(t)-v(t-s))ds\bigg)dz}_{J_{61}}\underbrace{-\int_{\Omega}w_{t}\,\partial_{t}\bigg(\int_{0}^{\infty}g_{2}(s)(w(t)-w(t-s))ds\bigg)dz}_{J_{62}}\\ &&+\underbrace{\int_{\Omega}f_{1}(v, w)\bigg(\int_{0}^{\infty}g_{1}(s)(v(t)-v(t-s))ds\bigg)dz}_{J_{71}}+\underbrace{\int_{\Omega}f_{2}(v, w)\bigg(\int_{0}^{\infty}g_{2}(s)(w(t)-w(t-s))ds\bigg)dz}_{J_{72}}. \end{eqnarray} (3.18)

    We now estimate the terms on the right-hand side of (3.18). Using the well-known Young's, Sobolev-Poincar\'e and H\"older's inequalities together with (2.1), (2.11) and Lemma 3.1, we proceed as follows:

    \begin{eqnarray} \vert J_{11}\vert&\leq&\big(\xi_{0}+\xi_{1}\Vert\nabla v\Vert_{2}^{2}\big)\bigg(\sigma\Vert\nabla v\Vert_{2}^{2}+\frac{1}{4\sigma}C_{\kappa, 1}(h_{1}\circ\nabla v)(t)\bigg)\leq\sigma\xi_{0}\Vert\nabla v\Vert_{2}^{2}+\sigma\xi_{1}\Vert\nabla v\Vert_{2}^{4}+\bigg(\frac{\xi_{0}}{4\sigma}+\frac{\xi_{1}E(0)}{4l_{1}\sigma}\bigg)C_{\kappa, 1}(h_{1}\circ\nabla v)(t), \end{eqnarray} (3.19)

    and

    \begin{eqnarray} J_{21}&\leq&\sigma_{2}\delta\bigg(\int_{\Omega}\nabla v\cdot\nabla v_{t}\,dz\bigg)^{2}\Vert\nabla v\Vert_{2}^{2}+\frac{\delta}{4\sigma_{2}}C_{\kappa, 1}(h_{1}\circ\nabla v)(t)\leq\sigma_{2}\frac{2\delta E(0)}{l_{1}}\bigg(\frac{1}{2}\frac{d}{dt}\Vert\nabla v\Vert_{2}^{2}\bigg)^{2}+\frac{\delta}{4\sigma_{2}}C_{\kappa, 1}(h_{1}\circ\nabla v)(t), \end{eqnarray} (3.20)
    \begin{eqnarray} \vert J_{31}\vert&\leq&\bigg\vert\int_{\Omega}\bigg(\int_{0}^{\infty}g_{1}(s)\nabla v(t)ds\bigg)\cdot\bigg(\int_{0}^{\infty}g_{1}(s)(\nabla v(t-s)-\nabla v(t))ds\bigg)dz\bigg\vert+\int_{\Omega}\bigg(\int_{0}^{\infty}g_{1}(s)(\nabla v(t)-\nabla v(t-s))ds\bigg)^{2}dz\leq\sigma g_{0}^{2}\Vert\nabla v\Vert_{2}^{2}+\bigg(1+\frac{1}{4\sigma}\bigg)C_{\kappa, 1}(h_{1}\circ\nabla v)(t), \end{eqnarray} (3.21)
    \begin{eqnarray} \vert J_{41}\vert&\leq&c(\sigma)\Vert v_{t}\Vert_{m}^{m}+\sigma\beta_{1}^{m}\int_{\Omega}\bigg(\int_{0}^{\infty}g_{1}(s)(v(t)-v(t-s))ds\bigg)^{m}dz\leq c(\sigma)\Vert v_{t}\Vert_{m}^{m}+\sigma\bigg(\beta_{1}^{m}c_{p}^{m}\bigg[\frac{4g_{0}E(0)}{l_{1}}\bigg]^{m-2}\bigg)C_{\kappa, 1}(h_{1}\circ\nabla v)(t)\leq c(\sigma)\Vert v_{t}\Vert_{m}^{m}+\sigma c_{3}C_{\kappa, 1}(h_{1}\circ\nabla v)(t). \end{eqnarray} (3.22)

    In the same, we obtained the following

    J51c(σ)x(z,1,r,t)mm+σc4Cκ,1(h1v)(t), (3.23)

    and to estimate J_{61} , we first compute

    \partial_{t}\bigg(\int_{0}^{\infty}g_{1}(s)(v(t)-v(t-s))ds\bigg) = \partial_{t}\bigg(\int_{-\infty}^{t}g_{1}(t-s)(v(t)-v(s))ds\bigg) = \int_{-\infty}^{t}g_{1}'(t-s)(v(t)-v(s))ds+\bigg(\int_{-\infty}^{t}g_{1}(t-s)ds\bigg)v_{t}(t) = \int_{0}^{\infty}g_{1}'(s)(v(t)-v(t-s))ds+g_{0}v_{t}(t),

    so that (2.2) implies

    \begin{equation} J_{61}\leq-(g_{0}-\sigma_{3})\Vert v_{t}\Vert_{2}^{2}+c_{\sigma_{3}}C_{\kappa, 1}(h_{1}\circ\nabla v)(t). \end{equation} (3.24)

    Following the same steps, the estimates of J_{i2} , i = 1, \ldots, 6 , are obtained, and

    \begin{eqnarray} J_{71}&\leq&c\sigma l_{1}\Vert\nabla v\Vert_{2}^{2}+c(\sigma)C_{\kappa, 1}(h_{1}\circ\nabla v)(t), \\ J_{72}&\leq&c\sigma l_{2}\Vert\nabla w\Vert_{2}^{2}+c(\sigma)C_{\kappa, 2}(h_{2}\circ\nabla w)(t). \end{eqnarray} (3.25)

    Putting (3.19)–(3.25) into (3.18), the required result is obtained.

    Lemma 3.6. The functional \Theta(t) introduced in (3.10) fulfills the below

    \begin{eqnarray} \Theta'(t)&\leq&-\gamma_{1}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}}r\big(\vert\beta_{2}(r)\vert\,\Vert x(z, \rho, r, t)\Vert_{m}^{m}+\vert\beta_{4}(r)\vert\,\Vert y(z, \rho, r, t)\Vert_{m}^{m}\big)dr\,d\rho-\gamma_{1}\int_{\tau_{1}}^{\tau_{2}}\big(\vert\beta_{2}(r)\vert\,\Vert x(z, 1, r, t)\Vert_{m}^{m}+\vert\beta_{4}(r)\vert\,\Vert y(z, 1, r, t)\Vert_{m}^{m}\big)dr+\beta_{5}\big(\Vert v_{t}(t)\Vert_{m}^{m}+\Vert w_{t}(t)\Vert_{m}^{m}\big), \end{eqnarray} (3.26)

    in which \beta_{5} = \max\{\beta_{1}, \beta_{3}\} .

    Proof. Differentiating \Theta(t) and using (2.9), we obtain the following

    \begin{eqnarray*} \Theta'( t) & = &-m\int_{\Omega}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}} e^{-r\rho} \vert \beta_{2}(r)\vert.\vert x\vert^{m-1} x_{\rho}\left( z, \rho, r, t\right) dr d\rho dz \\ &&-m\int_{\Omega}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}} e^{-r\rho} \vert \beta_{4}(r)\vert.\vert y\vert^{m-1} y_{\rho}\left( z, \rho, r, t\right) dr d\rho dz \\ & = &-\int_{\Omega}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}} r e^{-r\rho} \vert \beta_{2}(r)\vert.\vert x(z, \rho, r, t)\vert^{m} dr d\rho dz \\ &&-\int_{\Omega}\int_{\tau_{1}}^{\tau_{2}} \vert \beta_{2}(r)\vert\bigg[e^{-r} \vert x\left( z, 1, r, t\right)\vert^{m}-\vert x\left( z, 0, r, t\right)\vert^{m}\bigg] dr dz \\ &&-\int_{\Omega}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}} r e^{-r\rho} \vert \beta_{4}(r)\vert.\vert y(z, \rho, r, t)\vert^{m} dr d\rho dz \\ &&-\int_{\Omega}\int_{\tau_{1}}^{\tau_{2}} \vert \beta_{4}(r)\vert\bigg[e^{-r} \vert y\left( z, 1, r, t\right)\vert^{m}-\vert y\left( z, 0, r, t\right)\vert^{m}\bigg] dr dz \end{eqnarray*}

    Utilizing x(z, 0, r, t) = v_{t}(z, t), y(z, 0, r, t) = w_{t}(z, t) and e^{-r}\leq e^{-r\rho}\leq 1 for any 0 < \rho < 1 , and selecting \gamma_{1} = e^{-\tau_{2}} , we have

    \begin{eqnarray*} \Theta'( t) &\leq &-\gamma_{1}\int_{\Omega}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}}r\bigg(\vert \beta_{2}(r)\vert.\vert x(z, \rho, r, t)\vert^{m}+\vert \beta_{4}(r)\vert.\vert y(z, \rho, r, t)\vert^{m}\bigg) dr d\rho dz \\ &&-\gamma_{1}\int_{\Omega}\int_{\tau_{1}}^{\tau_{2}} \bigg(\vert \beta_{2}(r)\vert \vert x(z, 1, r, t)\vert^{m}+\vert \beta_{4}(r)\vert \vert y(z, 1, r, t)\vert^{m}\bigg) dr dz\\ && +\int_{\tau_{1}}^{\tau_{2}} \vert \beta_{2}(r)\vert dr\int_{\Omega}\vert v_{t}\vert^{m}(t)dz+\int_{\tau_{1}}^{\tau_{2}} \vert \beta_{4}(r)\vert dr\int_{\Omega}\vert w_{t}\vert^{m}(t)dz, \end{eqnarray*}

    Applying (2.4), the required proof is obtained. In the next step, the following functionals are introduced:

    \begin{eqnarray} \mathcal{A}_{1}(t)&: = &\int_{\Omega}\int_{0}^{t}\varphi_{1}(t-s)\nabla v(s)^{2}ds dz, \\ \mathcal{A}_{2}(t)&: = &\int_{\Omega}\int_{0}^{t}\varphi_{2}(t-s)\nabla w(s)^{2}ds dz, \end{eqnarray} (3.27)

    in which \varphi_{1}(t) = \int_{t}^{\infty}g_{1}(s)ds, \varphi_{2}(t) = \int_{t}^{\infty}g_{2}(s)ds .

    Lemma 3.7. Let us suppose that (2.1) and (2.2) are satisfied. Then the functional F_{1} = \mathcal{A}_{1}+\mathcal{A}_{2} fulfills the following

    \begin{eqnarray} F_{1}'(t)&\leq&-\frac{1}{2}\bigg((g_{1}\circ\nabla v)(t)+(g_{2}\circ\nabla w)(t)\bigg)\\ &&+3g_{0}\int_{\Omega}\nabla v^{2}dz+3\widehat{g}_{0}\int_{\Omega}\nabla w^{2}dz\\ &&+\frac{1}{2}\int_{\Omega}\int_{t}^{\infty}g_{1}(s)(\nabla v(t)-\nabla v(t-s))^{2}ds dz\\ &&+\frac{1}{2}\int_{\Omega}\int_{t}^{\infty}g_{2}(s)(\nabla w(t)-\nabla w(t-s))^{2}ds dz. \end{eqnarray} (3.28)

    Proof. We can easily prove this lemma with the help of Lemma 3.7 in [13] and Lemma 3.4 in [15].
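    For the reader's convenience, the computation behind Lemma 3.7 starts from the direct differentiation (only the v -part is displayed; the constants in (3.28) then follow as in the cited lemmas by splitting \vert\nabla v(s)\vert^{2}\leq 2\vert\nabla v(t)-\nabla v(s)\vert^{2}+2\vert\nabla v(t)\vert^{2} ):

    \mathcal{A}_{1}'(t) = \varphi_{1}(0)\int_{\Omega}\vert\nabla v(t)\vert^{2}dz+\int_{\Omega}\int_{0}^{t}\varphi_{1}'(t-s)\vert\nabla v(s)\vert^{2}ds\,dz = g_{0}\int_{\Omega}\vert\nabla v(t)\vert^{2}dz-\int_{\Omega}\int_{0}^{t}g_{1}(t-s)\vert\nabla v(s)\vert^{2}ds\,dz,

    since \varphi_{1}(0) = g_{0} and \varphi_{1}' = -g_{1} .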

    Now, we have sufficient mathematical tools to prove the below mentioned Theorem.

    Theorem 3.8. Take (2.1)–(2.5); then one can find positive constants \varsigma_{i}, i = 1, 2, 3 , and a positive function \varsigma_{4}(t) in a way that the energy functional mentioned in (2.11) fulfills

    \begin{equation} E\left( t\right) \leq \varsigma_{1}D_{2}^{-1} \bigg(\frac{\varsigma_{2}+\varsigma_{3}\int_{0}^{t}\widehat{\zeta}(\nu)D_{4}(\varsigma_{4}(\nu)\mu_{0}(\nu))d\nu}{\int_{0}^{t}\zeta_{0}(\nu)d\nu}\bigg), \end{equation} (3.29)

    in which

    \begin{equation} D_{2}(t) = tD'(\varepsilon_{0}t), \quad D_{3}(t) = tD'^{-1}(t), \quad D_{4}(t) = \overline{D}^{*}_{3}(t), \quad \end{equation} (3.30)

    and

    \mu_{0} = \max\{\mu, \widehat{\mu}\}, \quad \widehat{\zeta} = \max\{\zeta_{1}, \zeta_{2}\}, \quad \zeta_{0} = \min\{\zeta_{1}, \zeta_{2}\},

    which are increasing and convex in (0 , \varrho] .

    Proof. For the proof, we define the below functional

    \begin{eqnarray} \mathcal{H}(t)&: = & NE(t)+N_{1}\Psi(t)+N_{2}\Phi(t)+N_{3}\Theta(t), \end{eqnarray} (3.31)

    where the positive constants N, N_{i}, i = 1, 2, 3 , will be determined later. Differentiating (3.31) and utilizing (2.12) and Lemmas 3.4–3.6, we have

    \begin{eqnarray} \mathcal{H}'(t)&: = &NE'(t)+N_{1}\Psi'(t)+N_{2}\Phi'(t) +N_{3}\Theta'(t)\\ &\leq&-\bigg\{N_{2}(l_{0}-\sigma_{3})-N_{1}\bigg\}\bigg(\Vert v_{t}\Vert_{2}^{2}+\Vert w_{t}\Vert_{2}^{2}\bigg)\\ &&-\bigg\{N_{1}\xi_{1}-N_{2}\xi_{1}\sigma\bigg\}\bigg(\Vert\nabla v\Vert_{2}^{4}+\Vert\nabla w\Vert_{2}^{4}\bigg)\\ &&-\bigg\{N_{1}(l-\varepsilon(c_{1}+c_{2})-\sigma_{1})-N_{2}\sigma(\xi_{0}+\widehat{l_{0}}^{2}+c\widehat{l})\bigg\}\bigg(\Vert\nabla v\Vert_{2}^{2}+\Vert\nabla w\Vert_{2}^{2}\bigg)\\ &&-\bigg\{\frac{N\delta}{4}-N_{2}\sigma_{2}\frac{2\delta E(0)}{l}\bigg\}\bigg[\bigg(\frac{1}{2}\frac{d}{dt}\Vert\nabla v\Vert_{2}^{2}\bigg)^{2}+\bigg(\frac{1}{2}\frac{d}{dt}\Vert\nabla w\Vert_{2}^{2}\bigg)^{2}\bigg]\\ &&+\bigg\{N_{1}c(\sigma_{1})+N_{2}c(\sigma, \sigma_{2}, \sigma_{3})\bigg\}\bigg(C_{\kappa, 1}(h_{1}\circ\nabla v)(t)+C_{\kappa, 2}(h_{2}\circ\nabla w)(t)\bigg)\\ &&+\frac{N}{2}\bigg( (g_{1}'\circ\nabla v)(t)+(g_{2}'\circ\nabla w)(t)\bigg)\\ &&-\bigg\{\gamma_{0}N-N_{1}c(\varepsilon)-N_{2}c(\sigma)-N_{3}\beta_{5}\bigg\}\bigg(\Vert v_{t}\Vert_{m}^{m}+\Vert w_{t}\Vert_{m}^{m}\bigg)\\ &&-\bigg(\gamma_{1}N_{3}- N_{1}c(\varepsilon)-N_{2}c(\sigma)\bigg) \int_{\tau_{1}}^{\tau_{2}}\vert\beta_{2}(r)\vert\,\Vert x(z, 1, r, t)\Vert_{m}^{m}dr\\ &&-N_{3}\gamma_{1}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}} r\vert \beta_{2}(r)\vert\,\Vert x(z, \rho, r, t)\Vert_{m}^{m} dr d\rho\\ &&-\bigg(\gamma_{1}N_{3}- N_{1}c(\varepsilon)-N_{2}c(\sigma)\bigg) \int_{\tau_{1}}^{\tau_{2}}\vert\beta_{4}(r)\vert\,\Vert y(z, 1, r, t)\Vert_{m}^{m}dr\\ &&-N_{3}\gamma_{1}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}} r \vert \beta_{4}(r)\vert\,\Vert y(z, \rho, r, t)\Vert_{m}^{m} dr d\rho-N_{1}\int_{\Omega}F(v, w)dz . \end{eqnarray} (3.32)

    We select the various constants at this point such that the values included in parenthesis are positive in this stage. Here, putting

    \sigma_{3} = \frac{l_{0}}{2}, \quad \varepsilon = \frac{l}{4(c_{1}+c_{2})}, \quad \sigma_{1} = \frac{l}{4}, \quad \sigma_{2} = \frac{lN}{16 E(0)N_{2}}, \quad N_{1} = \frac{l_{0}}{4}N_{2}.

    Thus, we arrive at

    \begin{eqnarray} \mathcal{H}'(t)&\leq&-\frac{l_{0}}{4} N_{2}\bigg(\Vert v_{t}\Vert_{2}^{2}+\Vert w_{t}\Vert_{2}^{2}\bigg)-\xi_{1}N_{2}\bigg(\frac{l_{0}}{4}-\sigma\bigg)\bigg(\Vert\nabla v\Vert_{2}^{4}+\Vert\nabla w\Vert_{2}^{4}\bigg)\\ &&-N_{2}\bigg(\frac{ll_{0}}{8}-\sigma(\xi_{0}+\widehat{l_{0}}^{2}+c\widehat{l})\bigg)\bigg(\Vert\nabla v\Vert_{2}^{2}+\Vert\nabla w\Vert_{2}^{2}\bigg)\\ &&-\frac{N\delta}{8}\bigg[\bigg(\frac{1}{2}\frac{d}{dt}\Vert\nabla v\Vert_{2}^{2}\bigg)^{2}+\bigg(\frac{1}{2}\frac{d}{dt}\Vert\nabla w\Vert_{2}^{2}\bigg)^{2}\bigg]\\ &&+N_{2}c(\sigma, \sigma_{1}, \sigma_{2}, \sigma_{3})\bigg(C_{\kappa, 1}(h_{1}\circ\nabla v)(t)+C_{\kappa, 2}(h_{2}\circ\nabla w)(t)\bigg)\\ &&+\frac{N}{2}\bigg( (g_{1}'\circ\nabla v)(t)+(g_{2}'\circ\nabla w)(t)\bigg)-N_{1}\int_{\Omega}F(v, w)dz\\ &&-\bigg(\gamma_{0}N-N_{2}c(\sigma, \varepsilon)-N_{3}\beta_{5}\bigg)\bigg(\Vert v_{t}\Vert_{m}^{m}+\Vert w_{t}\Vert_{m}^{m}\bigg)\\ &&-\bigg(\gamma_{1}N_{3}-N_{2}c(\sigma, \varepsilon)\bigg) \int_{\tau_{1}}^{\tau_{2}}\vert\beta_{2}(r)\vert\,\Vert x(z, 1, r, t)\Vert_{m}^{m}dr\\ &&-N_{3}\gamma_{1}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}} r\vert \beta_{2}(r)\vert\,\Vert x(z, \rho, r, t)\Vert_{m}^{m} dr d\rho\\ &&-\bigg(\gamma_{1}N_{3}-N_{2}c(\sigma, \varepsilon)\bigg) \int_{\tau_{1}}^{\tau_{2}}\vert\beta_{4}(r)\vert\,\Vert y(z, 1, r, t)\Vert_{m}^{m}dr\\ &&-N_{3}\gamma_{1}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}} r\vert \beta_{4}(r)\vert\,\Vert y(z, \rho, r, t)\Vert_{m}^{m} dr d\rho . \end{eqnarray} (3.33)

    In what follows, we select \sigma in a manner that

    \sigma < \min\bigg\{\frac{l_{0}}{4}, \frac{ll_{0}}{8(\xi_{0}+\widehat{g_{0}}^{2}+c\widehat{l})}\bigg\}.

    After that, we take N_{2} in a way that

    N_{2}\bigg(\frac{ll_{0}}{8}-\sigma(\xi_{0}+\widehat{g_{0}}^{2}+c\widehat{l})\bigg) > 4l_{0},

    and take N_{3} large enough in a way that

    \gamma_{1}N_{3}-N_{2}c(\sigma, \varepsilon) > 0.

    As a result, for positive constants d_{i}, i = 1, 2, 3, 4, 5 , (3.33) can be written as

    \begin{eqnarray} \mathcal{H}'(t)&\leq&-d_{1}(\Vert v_{t}\Vert_{2}^{2}+\Vert w_{t}\Vert_{2}^{2})-d_{2}(\Vert\nabla v\Vert_{2}^{4}+\Vert\nabla w\Vert_{2}^{4})-4l_{0}(\Vert\nabla v\Vert_{2}^{2}+\Vert\nabla w\Vert_{2}^{2})\\ &&-\frac{N\delta}{8}\bigg[\bigg(\frac{1}{2}\frac{d}{dt}\Vert\nabla v\Vert_{2}^{2}\bigg)^{2}+\bigg(\frac{1}{2}\frac{d}{dt}\Vert\nabla w\Vert_{2}^{2}\bigg)^{2}\bigg]\\ &&-\bigg(\frac{N}{2}-d_{3}C_{\kappa}\bigg)\bigg((h_{1}\circ\nabla v)(t)+(h_{2}\circ\nabla w)(t)\bigg)\\ &&+\frac{N\kappa}{2} \bigg((g_{1}\circ\nabla v)(t)+(g_{2}\circ\nabla w)(t)\bigg) \\ &&-(\gamma_{0}N-c)\bigg(\Vert v_{t}\Vert_{m}^{m}+\Vert w_{t}\Vert_{m}^{m}\bigg)-d_{5}\int_{\Omega}F(v, w)dz\\ &&-d_{4}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}} s \bigg(\vert\beta_{2}(r)\vert.\Vert x(z, \rho, r, t)\Vert_{m}^{m}+\vert\beta_{4}(r)\vert.\Vert y(z, \rho, r, t)\Vert_{m}^{m}\bigg) dr d\rho, \end{eqnarray} (3.34)

    in which C_{\kappa} = \max\{C_{\kappa, 1}, C_{\kappa, 2}\} .

    We know that \frac{\kappa g_{i}^{2}(s)}{\kappa g_{i}(s)-g_{i}'(s)}\leq g_{i}(s) ; then, from the Lebesgue dominated convergence theorem, we have the below

    \begin{equation} \lim\limits_{\kappa\rightarrow 0^{+}}\kappa C_{\kappa, i} = \lim\limits_{\kappa\rightarrow 0^{+}}\int_{0}^{\infty}\frac{\kappa g_{i}^{2}(s)}{\kappa g_{i}(s)-g_{i}'(s)}ds = 0, \quad i = 1, 2, \end{equation} (3.35)

    which leads to

    \lim\limits_{\kappa\rightarrow 0^{+}}\kappa C_{\kappa} = 0.

    As a result of this, one can find 0 < \kappa_{0} < 1 in a manner that if \kappa < \kappa_{0} , then

    \begin{equation} \kappa C_{\kappa}\leq \frac{1}{d_{3}}. \end{equation} (3.36)

    From (3.8)–(3.10), a direct computation gives the following

    \begin{eqnarray} \vert\mathcal{H}(t)-NE(t)\vert&\leq&\frac{N_{1}}{2}\bigg(\Vert v_{t}(t)\Vert_{2}^{2}+\Vert w_{t}(t)\Vert_{2}^{2}+c_{p}\Vert\nabla v(t)\Vert_{2}^{2}+c_{p}\Vert\nabla w(t)\Vert_{2}^{2}\bigg)\\ &&+\delta\frac{N_{1}}{4}\bigg(\Vert\nabla v(t)\Vert_{2}^{4}+\Vert\nabla w(t)\Vert_{2}^{4}\bigg)+\frac{N_{2}}{2}\bigg(\Vert v_{t}(t)\Vert_{2}^{2}+\Vert w_{t}(t)\Vert_{2}^{2}\bigg)\\ &&+\frac{N_{2}}{2}c_{p}\bigg(C_{\kappa, 1}(g_{1}\circ\nabla v)(t)+C_{\kappa, 2}(g_{2}\circ\nabla w)(t)\bigg) \\ &&+N_{3}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}}r e^{-\rho r}\bigg(\vert \beta_{2}(r)\vert. \Vert x(z, \rho, r, t)\Vert_{m}^{m}+\vert \beta_{4}(r)\vert. \Vert y(z, \rho, r, t)\Vert_{m}^{m}\bigg) dr d\rho. \end{eqnarray} (3.37)

    By the fact e^{-\rho r} < 1 and (2.2), we have the below

    \begin{eqnarray} \vert\mathcal{H}(t)-NE(t)\vert&\leq&C(N_{1}, N_{2}, N_{3})E(t) = C_{1}E(t). \end{eqnarray} (3.38)

    that is

    \begin{equation} \left( N-C_{1}\right) E\left( t\right) \leq \mathcal{H}\left( t\right) \leq \left( N+C_{1}\right) E\left( t\right). \end{equation} (3.39)

    Here, set \kappa = \frac{1}{2N} and take N large enough in a manner that

    \begin{eqnarray*} &&N-C_{1} > 0, \quad \gamma_{0}N-c > 0, \quad \frac{1}{2}N-\frac{1}{2\kappa_{0}} > 0, \quad \kappa = \frac{1}{2N} < \kappa_{0}, \end{eqnarray*}

    we find

    \begin{equation} \mathcal{H}^{\prime }\left( t\right) \leq -k_{2}E(t)+\frac{1}{4}(( g_{1}\circ\nabla v)(t)+(g_{2}\circ\nabla w)(t) ) \end{equation} (3.40)

    for some k_{2} > 0 , and

    \begin{equation} c_{5}E\left( t\right) \leq \mathcal{H}\left( t\right) \leq c_{6}E\left( t\right) , \forall t\geq 0 \end{equation} (3.41)

    for some c_{5}, c_{6} > 0 , we have

    \mathcal{H}(t)\sim E(t).

    After that, the below cases are considered:

    Case 3.9. G_{i}, i = 1, 2 are linear. Multiplying (3.40) by \zeta_{0}(t) = \min\{\zeta_{1}(t), \zeta_{2}(t)\} , we find

    \begin{eqnarray} \zeta_{0}(t)\mathcal{H}^{\prime }\left( t\right) &\leq& -k_{2}\zeta_{0}(t)E(t)+\frac{1}{4}\zeta_{0}(t)(( g_{1}\circ\nabla v)(t)+(g_{2}\circ\nabla w)(t) )\\ &\leq&-k_{2}\zeta_{0}(t)E(t)+\frac{1}{4}\zeta_{1}(t)( g_{1}\circ\nabla v)(t)+\frac{1}{4}\zeta_{2}(t)(g_{2}\circ\nabla w)(t) . \end{eqnarray} (3.42)

    For the last two terms in (3.42), we have

    \begin{eqnarray} \frac{\zeta_{1}(t)}{4}(g_{1}\circ\nabla v)(t) & = & \frac{\zeta_{1}(t)}{4}\int_{\Omega}\int_{0}^{\infty}g_{1}(s)\vert\nabla\eta^{t}(s)\vert^{2}ds dz \\ & = &\underbrace{ \frac{\zeta_{1}(t)}{4}\int_{\Omega}\int_{0}^{t}g_{1}(s)\vert\nabla\eta^{t}(s)\vert^{2}ds dz}_{I_{1}}\\ &&+\underbrace{ \frac{\zeta_{1}(t)}{4}\int_{\Omega}\int_{t}^{\infty}g_{1}(s)\vert\nabla\eta^{t}(s)\vert^{2}ds dz}_{I_{2}} \end{eqnarray} (3.43)

    To estimate I_{1} , using (2.11),

    \begin{eqnarray} I_{1} &\leq& \frac{1}{4}\int_{\Omega}\int_{0}^{t}\zeta_{1}(s)g_{1}(s)\vert\nabla\eta^{t}(s)\vert^{2}ds dz\\ & = &-\frac{1}{4}\int_{\Omega}\int_{0}^{t}g_{1}'(s)\vert\nabla\eta^{t}(s)\vert^{2}ds dz\\ &\leq&-\frac{1}{2l_{1}}E'(t), \end{eqnarray} (3.44)

    and by (3.4), we get

    \begin{eqnarray} I_{2} &\leq& \frac{\beta}{4}\zeta_{1}(t)\mu(t). \end{eqnarray} (3.45)

    In the same way, we obtained

    \begin{eqnarray} \frac{\zeta_{2}(t)}{4}(g_{2}\circ\nabla w)(t) &\leq&-\frac{1}{2l_{2}}E'(t)+\frac{\widehat{\beta}}{4}\zeta_{2}(t)\widehat{\mu}(t). \end{eqnarray} (3.46)

    As a result of this, we get

    \begin{equation} \zeta_{0}(t)\mathcal{H}^{\prime }\left( t\right) \leq -k_{2}\zeta_{0}(t)E(t)-\frac{1}{\widehat{l}}E'(t)+2\beta_{0} w(t), \end{equation} (3.47)

    where \beta_{0} = \max\{\frac{\beta}{4}, \frac{\widehat{\beta}}{4}\} and w(t) = \widehat{\zeta}(t)\mu_{0}(t) .

    Applying \zeta_{i}'(t)\leq0 , we get

    \begin{eqnarray} \mathcal{H}_{1}^{\prime }\left( t\right)\leq-k_{2}\zeta_{0}(t)E(t)+2\beta_{0} w(t), \end{eqnarray} (3.48)

    with

    \mathcal{H}_{1}(t) = \zeta_{0}(t)\mathcal{H}\left( t\right)+\frac{1}{\widehat{l}} E(t)\sim E(t),

    we have

    \begin{equation} k_{4}E(t)\leq \mathcal{H}_{1}(t)\leq k_{5}E(t), \end{equation} (3.49)

    then, the following is obtained from (3.48)

    \begin{eqnarray*} k_{2}E(T)\int_{0}^{T}\zeta_{0}(t)dt&\leq&k_{2}\int_{0}^{T}\zeta_{0}(t)E(t)dt\notag\\ &\leq&\mathcal{H}_{1}(0)-\mathcal{H}_{1}(T)+2\beta_{0}\int_{0}^{T}w(t)dt\notag\\ &\leq&\mathcal{H}_{1}(0)+2\beta_{0}\int_{0}^{T}\widehat{\zeta}(t)\mu_{0}(t)dt. \end{eqnarray*}

    Further analysis implies that

    \begin{equation*} E(T)\leq\frac{1}{k_{2}}\bigg(\frac{\mathcal{H}_{1}(0)+2\beta_{0} \int_{0}^{T}\widehat{\zeta}(t)\mu_{0}(t)dt}{\int_{0}^{T}\zeta_{0}(t)dt}\bigg). \end{equation*}

    From the linearity of D , the linearity of the functions D_{2}, D'_{2} and D_{4} can easily be determined. This implies that

    \begin{equation} E(T)\leq\lambda_{1}D_{2}^{-1}\bigg(\frac{\frac{\mathcal{H}_{1}(0)}{k_{2}}+\frac{2\beta_{0}}{k_{2}} \int_{0}^{T}\widehat{\zeta}(t)\mu_{0}(t)dt}{\int_{0}^{T}\zeta_{0}(t)dt}\bigg), \end{equation} (3.50)

    which gives (3.29) with \varsigma_{1} = \lambda_{1} , \varsigma_{2} = \frac{\mathcal{H}_{1}(0)}{k_{2}} , \varsigma_{3} = \frac{2\beta_{0}}{\lambda_{2}k_{2}} , and \varsigma_{4}(t) = Id(t) = t . Hence, the required proof is completed.

    Case 3.10. Let G_{i}, i = 1, 2 , be nonlinear. Then, with the help of (3.28) and (3.40), we consider the positive functional

    \begin{equation*} \mathcal{H}_{2}(t) = \mathcal{H}(t)+F_{1}(t) \end{equation*}

    then for all t\geq 0 and for some k_{3} > 0 , the following holds true

    \begin{eqnarray} \mathcal{H}'_{2}(t)&\leq& -k_{3}E(t)+\frac{1}{2}\int_{\Omega}\int_{t}^{\infty}g_{1}(s)(\nabla v(t)-\nabla v(t-s))^{2}ds dz\\ &&+\frac{1}{2}\int_{\Omega}\int_{t}^{\infty}g_{2}(s)(\nabla w(t)-\nabla w(t-s))^{2}ds dz, \end{eqnarray} (3.51)

    with the help of (3.4), we have

    \begin{eqnarray} k_{3}\int_{0}^{t}E(x)dx&\leq& \mathcal{H}_{2}(0)-\mathcal{H}_{2}(t)+\beta_{0}\int_{0}^{t}\mu_{0}(\varsigma)d\varsigma\\ &\leq&\mathcal{H}_{2}(0)+\beta_{0}\int_{0}^{t}\mu_{0}(\varsigma)d\varsigma. \end{eqnarray} (3.52)

    Therefore

    \begin{eqnarray} \int_{0}^{t}E(x)dx&\leq&k_{6}\mu_{1}(t), \end{eqnarray} (3.53)

    where k_{6} = \max\{\frac{\mathcal{H}_{2}(0)}{k_{3}}, \frac{\beta_{0}}{k_{3}}\} and \mu_{1}(t) = 1+\int_{0}^{t}\mu_{0}(\varsigma)d\varsigma .

    Corollary 3.11. The following is obtained from (2.11) and (3.53):

    \begin{eqnarray} &&\int_{0}^{t}\int_{\Omega}\vert\nabla v(t)-\nabla v(t-s)\vert^{2}dz ds \\ &&+\int_{0}^{t}\int_{\Omega}\vert\nabla w(t)-\nabla w(t-s)\vert^{2}dz ds \\ &\leq&2\int_{0}^{t}\int_{\Omega}\nabla v^{2}(t)-\nabla v^{2}(t-s)dzds\\ &&+2\int_{0}^{t}\int_{\Omega}\nabla w^{2}(t)-\nabla w^{2}(t-s)dz ds \\ &\leq&\frac{4}{l_{0}}\int_{0}^{t}E(t)-E(t-s)ds \\ &\leq&\frac{8}{l_{0}}\int_{0}^{t}E(x)dx\leq\frac{8k_{6}}{l_{0}}\mu_{1}(t). \end{eqnarray} (3.54)

    Now, we define \phi_{i}(t), i = 1, 2 by

    \begin{eqnarray} \phi_{1}(t)&: = &\mathcal{B}(t)\int_{0}^{t}\int_{\Omega}\vert\nabla v(t)-\nabla v(t-s)\vert^{2}dzds, \\ \phi_{2}(t)&: = &\mathcal{B}(t)\int_{0}^{t}\int_{\Omega}\vert\nabla w(t)-\nabla w(t-s)\vert^{2}dz ds \end{eqnarray} (3.55)

    where \mathcal{B}(t) = \frac{\mathcal{B}_{0}}{\mu_{1}(t)} and 0 < \mathcal{B}_{0} < \min\{1, \frac{l}{8k_{6}}\} .

    Then, by (3.53), we have

    \begin{equation} \phi_{i}(t) < 1, \quad \forall t > 0, \quad i = 1, 2 \end{equation} (3.56)

    Further, we suppose that \phi_{i}(t) > 0, \quad \forall t > 0, \quad i = 1, 2 . In addition to this, we define another functional \Gamma_{1}, \Gamma_{2} by

    \begin{eqnarray} \Gamma_{1}(t)&: = &-\int_{0}^{t}g_{1}'(s)\int_{\Omega}\vert\nabla v(t)-\nabla v(t-s)\vert^{2}dz ds, \\ \Gamma_{2}(t)&: = &-\int_{0}^{t}g_{2}'(s)\int_{\Omega}\vert\nabla w(t)-\nabla w(t-s)\vert^{2}dz ds \end{eqnarray} (3.57)

    Here, obviously \Gamma_{i}(t)\leq -cE'(t), \quad i = 1, 2 . As G_{i}(0) = 0, \quad i = 1, 2 , and G_{i}(t) are strictly convex on (0 , \varrho] , then

    \begin{equation} G_{i}(\lambda z)\leq\lambda G_{i}(z), \quad 0 < \lambda < 1, \quad z\in(0, \varrho], \quad i = 1, 2. \end{equation} (3.58)
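    Indeed, (3.58) is an immediate consequence of convexity together with G_{i}(0) = 0 :

    G_{i}(\lambda z) = G_{i}\big(\lambda z+(1-\lambda)\cdot 0\big)\leq\lambda G_{i}(z)+(1-\lambda)G_{i}(0) = \lambda G_{i}(z), \quad 0 < \lambda < 1, \quad z\in(0, \varrho].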

    Applying (2.3) and (3.56), we get

    \begin{eqnarray} \Gamma_{1}(t)& = &\frac{-1}{\mathcal{B}(t) \phi_{1}(t)}\int_{0}^{t} \phi_{1}(t)(g_{1}'(s))\int_{\Omega}\mathcal{B}(t)\vert \nabla v(t)-\nabla v(t-s)\vert^{2}dzds\\ &\geq&\frac{1}{\mathcal{B}(t) \phi_{1}(t)}\int_{0}^{t} \phi_{1}(t)\zeta_{1}(s)G_{1}(g_{1}(s))\int_{\Omega}\mathcal{B}(t)\vert \nabla v(t)-\nabla v(t-s)\vert^{2}dzds\\ &\geq &\frac{\zeta_{1}(t)}{\mathcal{B}(t) \phi_{1}(t)}\int_{0}^{t} G_{1}(\phi_{1}(t)g_{1}(s))\int_{\Omega}\mathcal{B}(t)\vert \nabla v(t)-\nabla v(t-s)\vert^{2}dzds\\ &\geq &\frac{\zeta_{1}(t)}{\mathcal{B}(t) }G_{1}\bigg(\frac{1}{\phi_{1}(t)}\int_{0}^{t} \phi_{1}(t)g_{1}(s)\int_{\Omega}\mathcal{B}(t)\vert \nabla v(t)-\nabla v(t-s)\vert^{2}dzds\bigg)\\ & = &\frac{\zeta_{1}(t)}{\mathcal{B}(t) }G_{1}\bigg(\mathcal{B}(t)\int_{0}^{t} g_{1}(s)\int_{\Omega}\vert\nabla v(t)-\nabla v(t-s)\vert^{2}dzds\bigg)\\ & = &\frac{\zeta_{1}(t)}{\mathcal{B}(t) }\overline{G_{1}}\bigg(\mathcal{B}(t)\int_{0}^{t} g_{1}(s)\int_{\Omega}\vert \nabla v(t)-\nabla v(t-s)\vert^{2}dzds\bigg). \end{eqnarray} (3.59)
    \begin{eqnarray} \Gamma_{2}(t)&\geq &\frac{\zeta_{2}(t)}{\mathcal{B}(t) }\overline{G_{2}}\bigg(\mathcal{B}(t)\int_{0}^{t} g_{2}(s)\int_{\Omega}\vert \nabla w(t)-\nabla w(t-s)\vert^{2}dzds\bigg). \end{eqnarray} (3.60)

    Here, \overline{G_{i}}, i = 1, 2 , are C^{2} -extensions of G_{i} that are strictly convex and strictly increasing on {\bf R}_{+} , and (3.60) follows by the same steps. From (3.59) and (3.60), we have the following

    \begin{eqnarray} \int_{0}^{t} g_{1}(s)\int_{\Omega}\vert\nabla v(t)-\nabla v(t-s)\vert^{2}dzds&\leq&\frac{1}{\mathcal{B}(t)}\overline{G_{1}}^{-1}\bigg(\frac{\mathcal{B}(t) \Gamma_{1}(t)}{\zeta_{1}(t)}\bigg)\\ \int_{0}^{t} g_{2}(s)\int_{\Omega}\vert\nabla w(t)-\nabla w(t-s)\vert^{2}dzds&\leq&\frac{1}{\mathcal{B}(t)}\overline{G_{2}}^{-1}\bigg(\frac{\mathcal{B}(t) \Gamma_{2}(t)}{\zeta_{2}(t)}\bigg). \end{eqnarray} (3.61)

    Putting (3.61) and (3.4) into (3.40), we have

    \begin{eqnarray} \mathcal{H}^{\prime }\left( t\right) &\leq& -k_{2}E(t)+\frac{c}{\mathcal{B}(t)}\overline{G_{1}}^{-1}\bigg(\frac{\mathcal{B}(t) \Gamma_{1}(t)}{\zeta_{1}(t)}\bigg)\\ &&+\frac{c}{\mathcal{B}(t)}\overline{G_{2}}^{-1}\bigg(\frac{\mathcal{B}(t) \Gamma_{2}(t)}{\zeta_{2}(t)}\bigg)+k_{6}\mu_{0}(t) \end{eqnarray} (3.62)

    Here, introduce \mathcal{K}_{1}(t) for \varepsilon_{0} < r by

    \begin{eqnarray} \mathcal{K}_{1}(t) = D^{\prime}\left(\varepsilon_{0} \frac{\mathcal{B}(t)E(t)}{E(0)}\right) \mathcal{H}(t)+E(t), \end{eqnarray} (3.63)

    in which D' = \min\{G_{1}, G_{2}\} and is equivalent to E(t) . Because of this E^{\prime}(t) \leq 0, \overline{G_{i}}^{\prime} > 0, and \overline{G_{i}}^{\prime \prime} > 0, i = 1, 2 . Also applying (3.62), we obtained that

    \begin{eqnarray} \mathcal{K}_{1}^{\prime}(t)& = & \varepsilon_{0}\bigg( \frac{E^{\prime}(t) \mathcal{B}(t)}{E(0)}+\frac{E(t) \mathcal{B}'(t)}{E(0)}\bigg) D^{\prime \prime}\left(\varepsilon_{0} \frac{E(t) \mathcal{B}(t)}{E(0)}\right) \mathcal{H}(t) \\ &&+D^{\prime}\left(\varepsilon_{0} \frac{E(t) \mathcal{B}(t)}{E(0)}\right) \mathcal{H}^{\prime}(t)+E^{\prime}(t) \\ &\leq &-k_{2} E(t) D^{\prime}\left(\varepsilon_{0} \frac{\mathcal{B}(t) E(t)}{E(0)}\right)+k_{6}\mu_{0}(t)D^{\prime}\left(\varepsilon_{0} \frac{\mathcal{B}(t) E(t)}{E(0)}\right) \\ &&\left.+\frac{c}{\mathcal{B}(t)} \overline{G_{1}}^{-1}\left(\frac{\mathcal{B}(t) \Gamma_{1}(t)}{\zeta_{1}(t)}\right)\right) D^{\prime}\left(\varepsilon_{0} \frac{\mathcal{B}(t) E(t)}{E(0)}\right)\\ &&\left.+\frac{c}{\mathcal{B}(t)} \overline{G_{2}}^{-1}\left(\frac{\mathcal{B}(t) \Gamma_{2}(t)}{\zeta_{2}(t)}\right)\right) D^{\prime}\left(\varepsilon_{0} \frac{\mathcal{B}(t) E(t)}{E(0)}\right)+E^{\prime}(t) \end{eqnarray} (3.64)

    According to [29], we introduce the conjugate function of \overline{G_{i}} by \overline{G_{i}}^{*}, which fulfills

    \begin{eqnarray} A B \leq \overline{G_{i}}^{*}\left(A\right)+\overline{G_{i}}\left(B\right), \quad i = 1, 2 \end{eqnarray} (3.65)
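    Recall that for a convex, increasing \overline{G_{i}} the conjugate appearing in (3.65) is \overline{G_{i}}^{*}(A) = \sup_{B\geq 0}\big(AB-\overline{G_{i}}(B)\big) , and it satisfies

    \overline{G_{i}}^{*}(A) = A(\overline{G_{i}}')^{-1}(A)-\overline{G_{i}}\big((\overline{G_{i}}')^{-1}(A)\big)\leq A(\overline{G_{i}}')^{-1}(A),

    which is the bound used below to replace \overline{G_{i}}^{*}\big(D'(\cdot)\big) by D'(\cdot)\,(\overline{G_{i}}')^{-1}\big(D'(\cdot)\big) .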

    For A = D^{\prime}\left(\varepsilon_{0}(E(t)\mathcal{B}(t)) /(E(0)))\right) \text { and } B_{i} = \overline{G_{i}}^{-1}((\mathcal{B}(t) \Gamma_{i}(t))/(\zeta_{i}(t))), \quad i = 1, 2 and applying (3.64), we have

    \begin{eqnarray} \mathcal{K}_{1}^{\prime}(t) &\leq &-k_{2} E(t) D^{\prime}\left(\varepsilon_{0} \frac{ E(t) \mathcal{B}(t)}{E(0)}\right)+k_{6}\mu_{0}(t)D^{\prime}\left(\varepsilon_{0} \frac{ E(t) \mathcal{B}(t)}{E(0)}\right) \\ &&+\frac{c}{\mathcal{B}(t)} \overline{G_{1}}^{*}\left(D^{\prime}\left(\varepsilon_{0} \frac{ E(t)\mathcal{B}(t)}{E(0)}\right)\right)+\frac{c}{\mathcal{B}(t)} \frac{\mathcal{B}(t) \Gamma_{1}(t)}{\zeta_{1}(t)}\\ &&+\frac{c}{\mathcal{B}(t)} \overline{G_{2}}^{*}\left(D^{\prime}\left(\varepsilon_{0} \frac{ E(t)\mathcal{B}(t)}{E(0)}\right)\right)+\frac{c}{\mathcal{B}(t)} \frac{\mathcal{B}(t) \Gamma_{2}(t)}{\zeta_{2}(t)}+E^{\prime}(t) \\ &\leq &-k_{2} E(t) D^{\prime}\left(\varepsilon_{0} \frac{ E(t)\mathcal{B}(t)}{E(0)}\right)+k_{6}\mu_{0}(t)D^{\prime}\left(\varepsilon_{0} \frac{E(t) \mathcal{B}(t) }{E(0)}\right)\\ &&+\frac{c}{\mathcal{B}(t)}D'\bigg( \varepsilon_{0} \frac{ E(t) \mathcal{B}(t)}{E(0)}\bigg) (\overline{G_{1}}^{\prime})^{-1}\bigg[D'\left(\varepsilon_{0} \frac{ E(t) \mathcal{B}(t)}{E(0)}\right)\bigg] \\ &&+\frac{c}{\mathcal{B}(t)}D'\bigg( \varepsilon_{0} \frac{ E(t) \mathcal{B}(t)}{E(0)}\bigg) (\overline{G_{2}}^{\prime})^{-1}\bigg[D'\left(\varepsilon_{0} \frac{ E(t) \mathcal{B}(t)}{E(0)}\right)\bigg]\\ &&+\frac{c \Gamma_{1}(t)}{\zeta{1}(t)} +\frac{c \Gamma_{2}(t)}{\zeta_{2}(t)}. \end{eqnarray} (3.66)

    Here, we multiply (3.66) by \zeta_{0}(t) and get

    \begin{eqnarray} \zeta_{0}(t) \mathcal{K}_{1}^{\prime}(t) &\leq &-k_{2}\zeta_{0}(t) E(t) D^{\prime}\left(\varepsilon_{0} \frac{ E(t) \mathcal{B}(t)}{E(0)}\right)+k_{6}\zeta_{0}(t)\mu_{0}(t)D^{\prime}\left(\varepsilon_{0} \frac{ E(t) \mathcal{B}(t)}{E(0)}\right)\\ &&+\frac{2c\zeta_{0}(t)}{\mathcal{B}(t)} \varepsilon_{0} \frac{ E(t) \mathcal{B}(t)}{E(0)}D^{\prime}\left(\varepsilon_{0} \frac{ E(t) \mathcal{B}(t)}{E(0)}\right)+c \Gamma_{1}(t) +c \Gamma_{2}(t) \\ &\leq &-k_{2}\zeta_{0}(t) E(t) D^{\prime}\left(\varepsilon_{0} \frac{ E(t) \mathcal{B}(t)}{E(0)}\right)+k_{6}\zeta_{0}(t)\mu_{0}(t)D^{\prime}\left(\varepsilon_{0} \frac{E(t) \mathcal{B}(t) }{E(0)}\right)\\ &&+\frac{2c\zeta_{0}(t)}{\mathcal{B}(t)} \varepsilon_{0} \frac{ E(t) \mathcal{B}(t)}{E(0)} D^{\prime}\left(\varepsilon_{0} \frac{ E(t) \mathcal{B}(t)}{E(0)}\right)-c E^{\prime}(t) \end{eqnarray} (3.67)

    where we utilized the following \varepsilon_{0}(\mathcal{B}(t) E(t) / E(0)) < r , D^{\prime} = \min \{G_{1}, G_{2}\} and \Gamma_{i} < -cE'(t), i = 1, 2 , and define the functional \mathcal{K}_{2}(t) as

    \begin{eqnarray} \mathcal{K}_{2}(t) = \zeta_{0}(t) \mathcal{K}_{1}(t)+c E(t) \end{eqnarray} (3.68)

    Effortlessly, one can prove that \mathcal{K}_{2}(t) \sim E(t) , i.e., one can find two positive constants m_{1} and m_{2} in a manner that

    \begin{eqnarray} m_{1} \mathcal{K}_{2}(t) \leq E(t) \leq m_{2} \mathcal{K}_{2}(t), \end{eqnarray} (3.69)

    then, we have

    \begin{eqnarray} \mathcal{K}_{2}^{\prime}(t) &\leq&-\beta_{6} \zeta_{0}(t) \frac{ E(t)}{E(0)} D^{\prime}\left(\varepsilon_{0} \frac{ E(t) \mathcal{B}(t)}{E(0)}\right)+k_{6}\zeta_{0}(t)\mu_{0}(t)D^{\prime}\left(\varepsilon_{0} \frac{ E(t) \mathcal{B}(t)}{E(0)}\right)\\ & = &-\beta_{6} \frac{\zeta_{0}(t)}{\mathcal{B}(t)} D_{2}\left(\frac{ E(t) \mathcal{B}(t)}{E(0)}\right)+k_{6}\zeta_{0}(t)\mu_{0}(t)D^{\prime}\left(\varepsilon_{0} \frac{ E(t) \mathcal{B}(t)}{E(0)}\right), \end{eqnarray} (3.70)

    where \beta_{6} = (k_{2}E(0)-2c\varepsilon_{0}) and D_{2}(t) = t D^{\prime}\left(\varepsilon_{0} t\right) .

    Choosing \varepsilon_{0} so small such that \beta_{6} > 0 , since D_{2}^{\prime}(t) = D^{\prime}\left(\varepsilon_{0} t\right)+\varepsilon_{0} t D^{\prime \prime}\left(\varepsilon_{0} t\right) . As D_{2}^{\prime}(t), D_{2}(t) > 0 on (0 , 1] and G_{i} on (0 , \varrho] are strictly increasing. Applying Young's inequality (3.65) on the last term in (3.70)

    with A = D^{\prime}\left(\varepsilon_{0} \frac{ E(t) \mathcal{B}(t)}{E(0)}\right) and B = \frac{k_{6}}{\sigma}\mathcal{B}(t)\mu_{0}(t) , we find

    \begin{eqnarray} k_{6}\mu_{0}(t)D^{\prime}\left(\varepsilon_{0} \frac{ E(t) \mathcal{B}(t)}{E(0)}\right)& = &\frac{\sigma}{\mathcal{B}(t)}\bigg(\frac{k_{6}}{\sigma}\mathcal{B}(t)\mu_{0}(t)\bigg)\bigg(D^{\prime}\left(\varepsilon_{0} \frac{ E(t) \mathcal{B}(t)}{E(0)}\right)\bigg)\\ & < &\frac{\sigma}{\mathcal{B}(t)}D_{3}^{*}\bigg(\frac{k_{6}}{\sigma}\mathcal{B}(t)\mu_{0}(t)\bigg)+\frac{\sigma}{\mathcal{B}(t)}D_{3}\bigg(D^{\prime}\left(\varepsilon_{0} \frac{ E(t) \mathcal{B}(t)}{E(0)}\right)\bigg)\\ & < &\frac{\sigma}{\mathcal{B}(t)}D_{4}\bigg(\frac{k_{6}}{\sigma}\mathcal{B}(t)\mu_{0}(t)\bigg)\\ &&+\frac{\sigma}{\mathcal{B}(t)}\left(\varepsilon_{0} \frac{ E(t) \mathcal{B}(t)}{E(0)}\right)D^{\prime}\left(\varepsilon_{0} \frac{ E(t) \mathcal{B}(t)}{E(0)}\right)\\ & < &\frac{\sigma}{\mathcal{B}(t)}D_{4}\bigg(\frac{k_{6}}{\sigma}\mathcal{B}(t)\mu_{0}(t)\bigg)+\frac{\sigma\varepsilon_{0} }{\mathcal{B}(t)}D_{2}\left(\varepsilon_{0} \frac{ E(t) \mathcal{B}(t)}{E(0)}\right). \end{eqnarray} (3.71)

    Here, choose \sigma small enough in a manner that \beta_{6}-\sigma\varepsilon_{0} > 0 and, combining (3.70) and (3.71), we have

    \begin{eqnarray} \mathcal{K}_{2}^{\prime}(t) &\leq&-\beta_{7} \frac{\zeta_{0}(t)}{\mathcal{B}(t)} D_{2}\left(\frac{ E(t) \mathcal{B}(t)}{E(0)}\right)+\frac{\sigma\zeta_{0}(t)}{\mathcal{B}(t)}D_{4}\bigg(\frac{k_{6}}{\sigma}\mathcal{B}(t)\mu_{0}(t)\bigg). \end{eqnarray} (3.72)

    where \beta_{7} = \beta_{6}-\sigma \varepsilon_{0} > 0 , D_{3}(t) = t D'^{-1}\left(t\right) and D_{4}(t) = \overline{D}_{3}^{*}\left(t\right) .

    In light of the fact that E' < 0 and \mathcal{B}' < 0 , the quantity D_{2}\big(\frac{E(t) \mathcal{B}(t)}{E(0)}\big) is decreasing. As a consequence of this, for 0\leq t\leq T , we have

    \begin{equation} D_{2}\bigg(\frac{E(T) \mathcal{B}(T)}{E(0)}\bigg) < D_{2}\bigg(\frac{E(t) \mathcal{B}(t)}{E(0)}\bigg). \end{equation} (3.73)

    In the next step, combining (3.72) with (3.73) and multiplying by \mathcal{B}(t) , the following is obtained

    \begin{equation} \mathcal{B}(t)\mathcal{K}_{2}^{\prime}(t)+\beta_{7}\zeta_{0}(t) D_{2}\left(\frac{ E(T) \mathcal{B}(T)}{E(0)}\right) < \sigma\zeta_{0}(t)D_{4}\bigg(\frac{k_{6}}{\sigma}\mathcal{B}(t)\mu_{0}(t)\bigg). \end{equation} (3.74)

    Since \mathcal{B}' < 0 , then for any 0 < t < T

    \begin{eqnarray} (\mathcal{B}\mathcal{K}_{2})^{\prime}(t)+\beta_{7}\zeta_{0}(t) D_{2}\left(\frac{ E(T) \mathcal{B}(T)}{E(0)}\right)& < &\sigma \zeta_{0}(t)D_{4}\bigg(\frac{k_{6}}{\sigma}\mathcal{B}(t)\mu_{0}(t)\bigg)\\ & < &\sigma\widehat{\zeta}(t)D_{4}\bigg(\frac{k_{6}}{\sigma}\mathcal{B}(t)\mu_{0}(t)\bigg). \end{eqnarray} (3.75)

    Integrating (3.75) over [0, T] and applying \mathcal{B}(0) = 1 , the following is obtained

    \begin{equation} D_{2}\left(\frac{ E(T) \mathcal{B}(T)}{E(0)}\right)\int_{0}^{T}\zeta_{0}(t)dt < \frac{\mathcal{K}_{2}(0)}{\beta_{7}}+\frac{\sigma}{\beta_{7}}\int_{0}^{T}\widehat{\zeta}(t)D_{4}\bigg(\frac{k_{6}}{\sigma}\mathcal{B}(t)\mu_{0}(t)\bigg)dt. \end{equation} (3.76)

    Consequently, we have

    \begin{equation} D_{2}\left(\frac{ E(T) \mathcal{B}(T)}{E(0)}\right) < \frac{\frac{\mathcal{K}_{2}(0)}{\beta_{7}}+\frac{\sigma}{\beta_{7}}\int_{0}^{T}\widehat{\zeta}(t)D_{4}(\frac{k_{6}}{\sigma}\mathcal{B}(t)\mu_{0}(t))dt}{\int_{0}^{T}\zeta_{0}(t)dt}. \end{equation} (3.77)

    As a result of this, we obtain

    \begin{equation} \left(\frac{ E(T) \mathcal{B}(T)}{E(0)}\right) < D_{2}^{-1}\bigg(\frac{\frac{\mathcal{K}_{2}(0)}{\beta_{7}}+\frac{\sigma}{\beta_{7}}\int_{0}^{T}\widehat{\zeta}(t)D_{4}(\frac{k_{6}}{\sigma}\mathcal{B}(t)\mu_{0}(t))dt}{\int_{0}^{T}\zeta_{0}(t)dt}\bigg). \end{equation} (3.78)

    As a result of this, we get

    \begin{equation} E(T) < \frac{ E(0)}{\mathcal{B}(T)}D_{2}^{-1}\bigg(\frac{\frac{\mathcal{K}_{2}(0)}{\beta_{7}}+\frac{\sigma}{\beta_{7}}\int_{0}^{T}\widehat{\zeta}(t)D_{4}(\frac{k_{6}}{\sigma}\mathcal{B}(t)\mu_{0}(t))dt}{\int_{0}^{T}\zeta_{0}(t)dt}\bigg). \end{equation} (3.79)

    This gives (3.29) with \varsigma_{1} = \frac{ E(0)}{\mathcal{B}(T)} , \varsigma_{2} = \frac{\mathcal{K}_{2}(0)}{\beta_{7}} , \varsigma_{3} = \frac{\sigma}{\beta_{7}} , and \varsigma_{4}(t) = \frac{k_{6}}{\sigma}\mathcal{B}(t) .

    Hence, the required result of Theorem 3.8 is obtained.

    The purpose of this work was to study a coupled system of nonlinear viscoelastic wave equations with distributed delay components, infinite memory and Balakrishnan-Taylor damping. We assumed that the kernels g_{i} :{\bf R}_{+}\rightarrow {\bf R}_{+} satisfy

    g_{i}'(t)\leq-\zeta_{i}(t)G_{i}(g_{i}(t)), \quad \forall t\in {\bf R}_{+}, \quad \text{for} \quad i = 1, 2,

    in which \zeta_{i} and G_{i} are functions satisfying (G1) and (G2). We proved the stability of the system under these highly generic assumptions on the behaviour of g_i at infinity and by dropping the boundedness assumption on the history data. This type of problem is frequently found in mathematical models in the applied sciences, especially in the theory of viscoelasticity. What interests us in this work is the combination of these damping terms, which dictates how they appear in the problem. In future work, we will try to apply the same method to the same problem with additional damping terms.

    The researchers would like to thank the Deanship of Scientific Research, Qassim University for funding the publication of this project.

    The authors declare there is no conflict of interest.



    [1] I. Goodfellow, Y. Bengio, A. Courville, Deep learning, MIT press, Cambridge, 2016.
    [2] H. Xue, B. Huang, M. Qin, H. Zhou, H. Yang, Edge computing for internet of things: A survey, in 2020 International Conferences on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData) and IEEE Congress on Cybermatics (Cybermatics), 2020,755–760. https://doi.org/10.1109/iThings-GreenCom-CPSCom-SmartData-Cybermatics50389.2020.00130
© 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).