Research article

Deep quantization network with visual-semantic alignment for zero-shot image retrieval

  • Received: 01 April 2023 Revised: 19 May 2023 Accepted: 21 May 2023 Published: 01 June 2023
  • Approximate nearest neighbor (ANN) search has become an essential paradigm for large-scale image retrieval. Conventional ANN search requires the categories of query images to be seen in the training set. However, given the rapid emergence of new concepts on the web, it is too expensive to retrain the model by collecting labeled data for the new (unseen) concepts. Existing zero-shot hashing methods choose the semantic space or an intermediate space as the embedding space, which ignores the inconsistency between the visual space and the semantic space and suffers from the hubness problem in the zero-shot image retrieval task. In this paper, we present a novel deep quantization network with visual-semantic alignment for efficient zero-shot image retrieval. Specifically, we adopt a multi-task architecture that is capable of 1) learning discriminative and polymeric image representations for facilitating the visual-semantic alignment; 2) learning discriminative semantic embeddings for knowledge transfer; and 3) learning compact binary codes for aligning the visual space and the semantic space. We compare the proposed method with several state-of-the-art methods on several benchmark datasets, and the experimental results validate the superiority of the proposed method.

    Citation: Huixia Liu, Zhihong Qin. Deep quantization network with visual-semantic alignment for zero-shot image retrieval[J]. Electronic Research Archive, 2023, 31(7): 4232-4247. doi: 10.3934/era.2023215




    As stated in [1], in a real-world nervous system, synaptic transmission is a noisy process caused by random fluctuations in neurotransmitter release and other probabilistic factors. Therefore, it is necessary to consider stochastic neural networks (NNs), because random inputs may change the dynamics of an NN [2,3,4,5].

    Shunting inhibitory cellular neural networks (SICNNs), which were proposed in [6], have attracted the interest of many scholars since their introduction due to their special roles in psychophysics, robotics, adaptive pattern recognition, vision, and image processing. In these applications, their dynamics play an important role, and their various dynamical behaviors have accordingly been studied extensively (see [7,8,9,10,11,12,13] and the references therein). However, there is limited research on the dynamics of stochastic SICNNs, so it is necessary to study such NNs further.

    On the one hand, research on the dynamics of NNs that take values in a noncommutative algebra, such as quaternion-valued NNs [14,15,16], octonion-valued NNs [17,18,19,20], and Clifford-valued NNs [21,22,23], has gained the interest of many researchers, because such networks include the typical real-valued NNs as special cases and have superior multi-dimensional signal processing and data storage capabilities compared to real-valued NNs. It is worth mentioning that in recent years many authors have conducted extensive research on various dynamics of Clifford-valued NNs, such as the existence, multiplicity and stability of equilibrium points, the existence, multiplicity and stability of almost periodic solutions, and synchronization problems [22,23,24,25,26,27,28,29,30]. However, most of the existing results on the dynamics of Clifford-valued NNs have been obtained through decomposition methods [24,25,26,27], whose conclusions are generally not convenient for direct application, and there is little research on Clifford-valued NNs using non-decomposition methods [28,29,30]. Therefore, further exploration of non-decomposition methods for studying the dynamics of Clifford-valued NNs has important theoretical significance and application value.

    On the other hand, Bohr almost periodicity is a special case of Stepanov almost periodicity, yet there is little research on Stepanov almost periodic oscillations of NNs [19,31,32,33]; in particular, no results on Stepanov almost periodic solutions of stochastic SICNNs with discrete and infinitely distributed delays have been published yet.

    Motivated by the above discussion, the purpose of this article is to establish the existence and global exponential stability of Stepanov almost periodic solutions in the distribution sense for a stochastic Clifford-valued SICNN with mixed delays via a non-decomposition method.

    The subsequent sections of this article are organized as follows. Section 2 introduces some concepts, notations, and basic lemmas and gives a model description. Section 3 discusses the existence and stability of Stepanov almost periodic solutions in the distribution sense of the NN under consideration. An example is provided in Section 4. Finally, Section 5 provides a brief conclusion.

    Let $\mathcal{A}=\big\{\sum_{\vartheta\in P}x^{\vartheta}e_{\vartheta}:x^{\vartheta}\in\mathbb{R}\big\}$ be a real Clifford algebra with $N$ generators $e_{\emptyset}=e_{0}=1$ and $e_{h}$, $h=1,2,\ldots,N$, where $P=\{\emptyset,0,1,2,\ldots,\vartheta,\ldots,12\cdots N\}$, $e_{i}^{2}=1$ for $i=1,2,\ldots,r$, $e_{i}^{2}=-1$ for $i=r+1,r+2,\ldots,m$, and $e_{i}e_{j}+e_{j}e_{i}=0$ for $i\neq j$, $i,j=1,2,\ldots,N$. For $x=\sum_{\vartheta\in P}x^{\vartheta}e_{\vartheta}\in\mathcal{A}$, we write $\|x\|_{\mathcal{A}}=\max_{\vartheta\in P}\{|x^{\vartheta}|\}$, $x^{c}=\sum_{\vartheta\neq0}x^{\vartheta}e_{\vartheta}$ and $x^{0}=x-x^{c}$, and for $x=(x_{11},x_{12},\ldots,x_{1n},x_{21},x_{22},\ldots,x_{2n},\ldots,x_{mn})^{T}\in\mathcal{A}^{m\times n}$ we denote $\|x\|_{0}=\max\{\|x_{ij}\|_{\mathcal{A}},1\le i\le m,1\le j\le n\}$. The derivative of $x(t)=\sum_{\vartheta\in P}x^{\vartheta}(t)e_{\vartheta}$ is defined by $\dot{x}(t)=\sum_{\vartheta\in P}\dot{x}^{\vartheta}(t)e_{\vartheta}$, and the integral of $x(t)=\sum_{\vartheta\in P}x^{\vartheta}(t)e_{\vartheta}$ over the interval $[a,b]$ is defined by $\int_{a}^{b}x(t)\,dt=\sum_{\vartheta\in P}\big(\int_{a}^{b}x^{\vartheta}(t)\,dt\big)e_{\vartheta}$.
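
The componentwise arithmetic above can be illustrated with a small Python sketch (not from the paper): elements of $\mathcal{A}$ are stored as dictionaries mapping a sorted tuple of generator indices (a basis blade) to a real coefficient, and `signature` is an illustrative choice of the signs $e_i^2=\pm1$.

```python
def blade_mul(A, B, signature):
    """Multiply basis blades A and B (sorted tuples of generator indices).
    signature[i] in {+1, -1} gives e_i * e_i. Returns (sign, blade)."""
    sign = 1
    result = list(A)
    for g in B:
        # move e_g left past every generator with a larger index,
        # picking up a factor -1 for each anticommutation
        sign *= (-1) ** sum(1 for h in result if h > g)
        if g in result:
            sign *= signature[g]   # e_g * e_g = signature[g]
            result.remove(g)
        else:
            result.append(g)
            result.sort()
    return sign, tuple(result)

def mul(x, y, signature):
    """Product of two Clifford numbers given as {blade: coefficient} dicts."""
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            s, blade = blade_mul(a, b, signature)
            out[blade] = out.get(blade, 0.0) + s * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def norm(x):
    """The norm ||x||_A = max over components |x^theta|."""
    return max((abs(c) for c in x.values()), default=0.0)
```

For instance, with two generators and signature $e_1^2=1$, $e_2^2=-1$, the product obeys $e_1e_2=-e_2e_1$, exactly as in the anticommutation rule above.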

    Let $(Y,\rho)$ be a separable metric space and $\mathcal{P}(Y)$ the collection of all probability measures defined on the Borel $\sigma$-algebra of $Y$. Denote by $C_{b}(Y)$ the set of bounded continuous functions $g:Y\to\mathbb{R}$ with $\|g\|_{\infty}:=\sup_{x\in Y}\{|g(x)|\}<\infty$.

    For $g\in C_{b}(Y)$ and $\mu,\nu\in\mathcal{P}(Y)$, let us define

    $$\|g\|_{L}=\sup_{x\neq y}\frac{|g(x)-g(y)|}{\rho(x,y)},\qquad\|g\|_{BL}=\max\{\|g\|_{\infty},\|g\|_{L}\},$$
    $$\rho_{BL}(\mu,\nu):=\sup_{\|g\|_{BL}\le1}\Big|\int_{Y}g\,d(\mu-\nu)\Big|.$$

    According to [34], $(\mathcal{P}(Y),\rho_{BL}(\cdot,\cdot))$ is a Polish space.

    Definition 2.1. [35] A continuous function $g:\mathbb{R}\to Y$ is called almost periodic if for every $\varepsilon>0$ there exists $\ell(\varepsilon)>0$ such that every interval of length $\ell$ contains a point $\tau$ satisfying

    $$\rho(g(t+\tau),g(t))<\varepsilon\quad\text{for all }t\in\mathbb{R}.$$

    We denote by $AP(\mathbb{R},Y)$ the set of all such functions.
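
As a numerical illustration of Definition 2.1 (not from the paper), the quasi-periodic function $g(t)=\sin t+\sin(\sqrt{2}\,t)$ is almost periodic but not periodic; a good $\varepsilon$-translation number can be read off the continued-fraction expansion of $\sqrt{2}$:

```python
import math

def g(t):
    # quasi-periodic: two incommensurate frequencies, so g is never periodic
    return math.sin(t) + math.sin(math.sqrt(2) * t)

def sup_translation_error(tau, t_max=1000.0, step=0.01):
    """Approximate sup_t |g(t + tau) - g(t)| on a grid of sample points."""
    n = int(t_max / step)
    return max(abs(g(k * step + tau) - g(k * step)) for k in range(n))

# tau = 2*pi*70: since 70*sqrt(2) = 98.9949... is nearly the integer 99
# (70 is a continued-fraction denominator of sqrt(2)), tau is an
# epsilon-translation number with epsilon ~ 2*pi*|70*sqrt(2) - 99| ~ 0.032
tau = 140 * math.pi
err = sup_translation_error(tau)
```

Here `err` is small but nonzero, matching the definition: $\tau$ shifts $g$ within $\varepsilon$ of itself everywhere, without being an exact period.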

    Let $(X,\|\cdot\|)$ signify a separable Banach space. Denote by $\mu(X):=P\circ X^{-1}$ and $E(X)$ the distribution and the expectation, respectively, of a random variable $X:(\Omega,\mathcal{F},P)\to X$.

    Let $L^{p}(\Omega,X)$ indicate the family of all $X$-valued random variables satisfying $E(\|X\|^{p})=\int_{\Omega}\|X\|^{p}\,dP<\infty$.

    Definition 2.2. [21] A process $Z:\mathbb{R}\to L^{p}(\Omega,X)$ is called $L^{p}$-continuous if for any $t_{0}\in\mathbb{R}$,

    $$\lim_{t\to t_{0}}E\|Z(t)-Z(t_{0})\|^{p}=0.$$

    It is $L^{p}$-bounded if $\sup_{t\in\mathbb{R}}E\|Z(t)\|^{p}<\infty$.

    For $1<p<\infty$, we denote by $L_{loc}^{p}(\mathbb{R},X)$ the space of all functions from $\mathbb{R}$ to $X$ which are locally $p$-integrable. For $g\in L_{loc}^{p}(\mathbb{R},X)$, we consider the following Stepanov norm:

    $$\|g\|_{S^{p}}=\sup_{t\in\mathbb{R}}\Big(\int_{t}^{t+1}\|g(s)\|^{p}ds\Big)^{\frac1p}.$$

    Definition 2.3. [35] A function $g\in L_{loc}^{p}(\mathbb{R},X)$ is called $p$-th Stepanov almost periodic if for any $\varepsilon>0$, it is possible to find a number $\ell>0$ such that every interval of length $\ell$ contains a number $\tau$ such that

    $$\|g(t+\tau)-g(t)\|_{S^{p}}<\varepsilon.$$

    Definition 2.4. [9] A stochastic process $Z\in L_{loc}^{p}(\mathbb{R},L^{p}(\Omega,X))$ is said to be $S^{p}$-bounded if

    $$\|Z\|_{S_{s}^{p}}:=\sup_{t\in\mathbb{R}}\Big(\int_{t}^{t+1}E\|Z(s)\|^{p}ds\Big)^{\frac1p}<\infty.$$

    Definition 2.5. [9] A stochastic process $Z\in L_{loc}^{p}(\mathbb{R},L^{p}(\Omega,X))$ is called Stepanov almost periodic in $p$-th mean if for any $\varepsilon>0$, it is possible to find a number $\ell>0$ such that every interval of length $\ell$ contains a number $\tau$ such that

    $$\|Z(t+\tau)-Z(t)\|_{S_{s}^{p}}<\varepsilon.$$

    Definition 2.6. [9] A stochastic process $Z:\mathbb{R}\to L^{p}(\Omega,X)$ is said to be $p$-th Stepanov almost periodic in the distribution sense if for each $\varepsilon>0$, it is possible to find a number $\ell>0$ such that any interval of length $\ell$ contains a number $\tau$ such that

    $$\sup_{a\in\mathbb{R}}\Big(\int_{a}^{a+1}d_{BL}^{p}\big(P\circ[Z(t+\tau)]^{-1},P\circ[Z(t)]^{-1}\big)dt\Big)^{\frac1p}<\varepsilon.$$

    Lemma 2.1. [36] (Burkholder-Davis-Gundy inequality) If $f\in L^{2}(J,\mathbb{R})$, $p>2$, and $B(t)$ is a Brownian motion, then

    $$E\Big[\sup_{t\in J}\Big|\int_{t_{0}}^{t}f(s)dB(s)\Big|^{p}\Big]\le C_{p}E\Big[\int_{t_{0}}^{T}|f(s)|^{2}ds\Big]^{\frac p2},$$

    where $C_{p}=\Big(\frac{p^{p+1}}{2(p-1)^{p-1}}\Big)^{\frac p2}$ and $J=[t_{0},T]$.
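
A Monte Carlo sanity check of Lemma 2.1 (illustrative, not from the paper) takes $f\equiv1$ on $[0,T]$, so that $\int_0^t f\,dB=B_t$ and the right-hand side reduces to $C_p T^{p/2}$:

```python
import random

def bdg_check(p=4, T=1.0, n_steps=500, n_paths=2000, seed=0):
    """Estimate E[sup_{t<=T} |B_t|^p] by Euler simulation of Brownian paths
    and compare it with the BDG bound C_p * T^(p/2) for f identically 1."""
    rng = random.Random(seed)
    dt = T / n_steps
    acc = 0.0
    for _ in range(n_paths):
        b, peak = 0.0, 0.0
        for _ in range(n_steps):
            b += rng.gauss(0.0, dt ** 0.5)   # Brownian increment
            peak = max(peak, abs(b))
        acc += peak ** p
    lhs = acc / n_paths
    c_p = (p ** (p + 1) / (2 * (p - 1) ** (p - 1))) ** (p / 2)
    rhs = c_p * T ** (p / 2)
    return lhs, rhs
```

For $p=4$ the left-hand side is a single-digit number (Doob's inequality already gives $E[\sup|B_t|^4]\le(4/3)^4\,E|B_1|^4\approx9.5$), comfortably below $C_4\approx360$.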

    The model that we consider in this paper is the following stochastic Clifford-valued SICNN with mixed delays:

    $$\begin{aligned}dx_{ij}(t)=\Big[&-a_{ij}(t)x_{ij}(t)-\sum_{C_{kl}\in N_{h_{1}}(i,j)}C_{ij}^{kl}(t)f(x_{kl}(t-\tau_{kl}(t)))x_{ij}(t)\\&-\sum_{C_{kl}\in N_{h_{2}}(i,j)}B_{ij}^{kl}(t)\int_{0}^{\infty}K_{ij}(u)g(x_{kl}(t-u))du\,x_{ij}(t)+L_{ij}(t)\Big]dt\\&+\sum_{C_{kl}\in N_{h_{3}}(i,j)}E_{ij}^{kl}(t)\delta_{ij}(x_{ij}(t-\sigma_{ij}(t)))d\omega_{ij}(t),\end{aligned}\tag{2.1}$$

    where $i=1,2,\ldots,m$, $j=1,2,\ldots,n$, $C_{ij}$ represents the cell at the $(i,j)$ position, and the $h_{1}$-neighborhood $N_{h_{1}}(i,j)$ of $C_{ij}$ is given by

    $$N_{h_{1}}(i,j)=\{C_{kl}:\max(|k-i|,|l-j|)\le h_{1},\ 1\le k\le m,\ 1\le l\le n\};$$

    $N_{h_{2}}(i,j)$ and $N_{h_{3}}(i,j)$ are defined similarly. Here $x_{ij}$ denotes the activity of the cell $C_{ij}$; $L_{ij}:\mathbb{R}\to\mathcal{A}$ corresponds to the external input to $C_{ij}$; the function $a_{ij}:\mathbb{R}\to\mathcal{A}$ represents the decay rate of the cell activity; $C_{ij}^{kl},B_{ij}^{kl},E_{ij}^{kl}:\mathbb{R}\to\mathcal{A}$ signify the connection or coupling strength of postsynaptic activity of the cell $C_{kl}$ transmitted to the cell $C_{ij}$; the activity functions $f,g:\mathcal{A}\to\mathcal{A}$ are continuous functions representing the output or firing rate of the cell $C_{kl}$; $\tau_{kl},\sigma_{ij}:\mathbb{R}\to\mathbb{R}^{+}$ are the transmission delays; the kernel $K_{ij}:\mathbb{R}\to\mathbb{R}$ is an integrable function; $\omega_{ij}(t)$ represents a Brownian motion defined on a complete probability space; and $\delta_{ij}:\mathcal{A}\to\mathcal{A}$ is a Borel measurable function.

    Let $(\Omega,\mathcal{F},\{\mathcal{F}_{t}\}_{t\ge0},P)$ be a complete probability space in which $\{\mathcal{F}_{t}\}_{t\ge0}$ is a natural filtration satisfying the usual conditions. Denote by $\mathcal{B}_{\mathcal{F}_{0}}([-\theta,0],\mathcal{A}^{m\times n})$ the family of bounded, $\mathcal{F}_{0}$-measurable and $\mathcal{A}^{m\times n}$-valued random variables from $[-\theta,0]$ to $\mathcal{A}^{m\times n}$. The initial values of system (2.1) are given by

    $$x_{ij}(s)=\phi_{ij}(s),\quad s\in[-\theta,0],$$

    where $\phi_{ij}\in\mathcal{B}_{\mathcal{F}_{0}}([-\theta,0],\mathcal{A})$ and $\theta=\max_{ij\in\Lambda}\big\{\sup_{t\in\mathbb{R}}\tau_{ij}(t),\sup_{t\in\mathbb{R}}\sigma_{ij}(t)\big\}$.
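
To make the model concrete, the following Euler-Maruyama sketch simulates a real-valued $2\times2$ toy instance of (2.1) with constant coefficients, a single discrete delay, and no distributed-delay term; every parameter value here is illustrative, not taken from the paper:

```python
import math, random

def simulate_sicnn(T=20.0, dt=0.01, seed=1):
    """Euler-Maruyama for a real-valued 2x2 toy special case of model (2.1):
    dx_ij = [-a*x_ij - C*sum f(x_kl(t - tau))*x_ij + L_ij(t)] dt
            + E*delta(x_ij) dW_ij.  Returns the final 2x2 state."""
    rng = random.Random(seed)
    m = n = 2
    a, C, E, tau = 2.0, 0.1, 0.05, 1.0
    f = math.tanh                                   # bounded Lipschitz activation
    delta = lambda v: 0.1 * v                       # Lipschitz noise intensity
    L = lambda t, i, j: 0.5 * math.sin(t + i + j)   # almost periodic input
    lag = int(tau / dt)
    # history buffer covering [-tau, 0]; constant initial function
    hist = [[[0.5 * (i + j + 1) for j in range(n)] for i in range(m)]
            for _ in range(lag + 1)]
    t = 0.0
    for _ in range(int(T / dt)):
        cur, past = hist[-1], hist[0]               # states at t and t - tau
        new = [[0.0] * n for _ in range(m)]
        for i in range(m):
            for j in range(n):
                # shunting inhibition from the full 2x2 neighborhood
                inhib = sum(f(past[k][l]) for k in range(m) for l in range(n))
                drift = -a * cur[i][j] - C * inhib * cur[i][j] + L(t, i, j)
                new[i][j] = (cur[i][j] + drift * dt
                             + E * delta(cur[i][j]) * rng.gauss(0.0, math.sqrt(dt)))
        hist.append(new)
        hist.pop(0)
        t += dt
    return hist[-1]
```

Because the decay term dominates the bounded shunting and input terms, trajectories remain bounded, in line with the boundedness asserted for the solution space below.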

    For convenience, we introduce the following notations:

    $$\underline{a}^{0}=\min_{ij\in\Lambda}\underline{a}_{ij}^{0}=\min_{ij\in\Lambda}\inf_{t\in\mathbb{R}}a_{ij}^{0}(t),\qquad\bar{a}^{0}=\max_{ij\in\Lambda}\bar{a}_{ij}^{0}=\max_{ij\in\Lambda}\sup_{t\in\mathbb{R}}a_{ij}^{0}(t),$$
    $$C_{ij}^{kl+}=\sup_{t\in\mathbb{R}}\|C_{ij}^{kl}(t)\|_{\mathcal{A}},\qquad\bar{a}^{c}=\max_{ij\in\Lambda}\bar{a}_{ij}^{c}=\max_{ij\in\Lambda}\sup_{t\in\mathbb{R}}\|a_{ij}^{c}(t)\|_{\mathcal{A}},$$
    $$B_{ij}^{kl+}=\sup_{t\in\mathbb{R}}\|B_{ij}^{kl}(t)\|_{\mathcal{A}},\qquad E_{ij}^{kl+}=\sup_{t\in\mathbb{R}}\|E_{ij}^{kl}(t)\|_{\mathcal{A}},\qquad K_{ij}^{+}=\sup_{t\in\mathbb{R}}|K_{ij}(t)|,$$
    $$\tau_{kl}^{+}=\sup_{t\in\mathbb{R}}\tau_{kl}(t),\qquad\dot{\tau}_{kl}^{+}=\sup_{t\in\mathbb{R}}\dot{\tau}_{kl}(t),\qquad\sigma_{ij}^{+}=\sup_{t\in\mathbb{R}}\sigma_{ij}(t),\qquad\dot{\sigma}_{ij}^{+}=\sup_{t\in\mathbb{R}}\dot{\sigma}_{ij}(t),$$
    $$M_{L}=\max_{ij\in\Lambda}L_{ij}^{+}=\max_{ij\in\Lambda}\sup_{t\in\mathbb{R}}\|L_{ij}(t)\|_{\mathcal{A}},\qquad\theta=\max_{ij\in\Lambda}\{\tau_{ij}^{+},\sigma_{ij}^{+}\},\qquad\Lambda=\{11,12,\ldots,1n,\ldots,mn\}.$$

    Throughout this paper, we make the following assumptions:

    (A1) For $ij\in\Lambda$, $f,g,\delta_{ij}\in C(\mathcal{A},\mathcal{A})$ satisfy the Lipschitz condition and $f,g$ are bounded; that is, there exist constants $L_{f}>0$, $L_{g}>0$, $L_{\delta_{ij}}>0$, $M_{f}>0$, $M_{g}>0$ such that for all $x,y\in\mathcal{A}$,

    $$\|f(x)-f(y)\|_{\mathcal{A}}\le L_{f}\|x-y\|_{\mathcal{A}},\quad\|g(x)-g(y)\|_{\mathcal{A}}\le L_{g}\|x-y\|_{\mathcal{A}},\quad\|\delta_{ij}(x)-\delta_{ij}(y)\|_{\mathcal{A}}\le L_{\delta_{ij}}\|x-y\|_{\mathcal{A}},$$
    $$\|f(x)\|_{\mathcal{A}}\le M_{f},\qquad\|g(x)\|_{\mathcal{A}}\le M_{g};$$

    furthermore, $f(0)=g(0)=\delta_{ij}(0)=0$.

    (A2) For $ij\in\Lambda$, $a_{ij}^{0}\in AP(\mathbb{R},\mathbb{R}^{+})$, $a_{ij}^{c}\in AP(\mathbb{R},\mathcal{A})$, $\tau_{ij},\sigma_{ij}\in AP(\mathbb{R},\mathbb{R}^{+})\cap C^{1}(\mathbb{R},\mathbb{R})$ with $1-\dot{\tau}_{ij}^{+}>0$ and $1-\dot{\sigma}_{ij}^{+}>0$, $C_{ij}^{kl},B_{ij}^{kl},E_{ij}^{kl}\in AP(\mathbb{R},\mathcal{A})$, and $L=(L_{11},L_{12},\ldots,L_{mn})\in L_{loc}^{p}(\mathbb{R},L^{p}(\Omega,\mathcal{A}^{m\times n}))$ is almost periodic in the sense of Stepanov.

    (A3) For $p>2$ and $\frac1p+\frac1q=1$,

    $$0<r_{1}:=8^{p-4}\max_{ij\in\Lambda}\Big\{\Big(\frac{p}{q\underline{a}_{ij}^{0}}\Big)^{\frac pq}\frac{q}{p\underline{a}_{ij}^{0}}\Big[(\bar{a}_{ij}^{c})^{p}+\Big(\sum_{C_{kl}\in N_{h_{1}}(i,j)}(C_{ij}^{kl+})^{q}\Big)^{\frac pq}(2\kappa L_{f}+M_{f})^{p}+\Big(\sum_{C_{kl}\in N_{h_{2}}(i,j)}(B_{ij}^{kl+})^{q}\Big)^{\frac pq}\Big((2\kappa L_{g}+M_{g})\int_{0}^{\infty}|K_{ij}(u)|du\Big)^{p}\Big]+C_{p}\Big(\frac{p-2}{2\underline{a}_{ij}^{0}}\Big)^{\frac{p-2}{2}}\frac{q}{p\underline{a}_{ij}^{0}}\Big(\sum_{C_{kl}\in N_{h_{3}}(i,j)}(E_{ij}^{kl+})^{q}\Big)^{\frac pq}(L_{\delta_{ij}})^{p}\Big\}<1,$$

    and for $p=2$,

    $$0<r_{2}:=16\max_{ij\in\Lambda}\Big\{\frac{1}{(\underline{a}_{ij}^{0})^{2}}\Big[(\bar{a}_{ij}^{c})^{2}+\sum_{C_{kl}\in N_{h_{1}}(i,j)}(C_{ij}^{kl+})^{2}(2\kappa L_{f}+M_{f})^{2}+\sum_{C_{kl}\in N_{h_{2}}(i,j)}(B_{ij}^{kl+})^{2}\Big((2\kappa L_{g}+M_{g})\int_{0}^{\infty}|K_{ij}(u)|du\Big)^{2}\Big]+\frac{1}{2\underline{a}_{ij}^{0}}\sum_{C_{kl}\in N_{h_{3}}(i,j)}(E_{ij}^{kl+})^{2}(L_{\delta_{ij}})^{2}\Big\}<1.$$

    (A4) For $\frac1p+\frac1q=1$,

    $$0<\frac{q}{p\underline{a}^{0}}\rho_{1}:=16^{p-1}\frac{q}{p\underline{a}^{0}}\max_{ij\in\Lambda}\Big\{\Big(\frac{p}{q\underline{a}_{ij}^{0}}\Big)^{\frac pq}\Big[(\bar{a}_{ij}^{c})^{p}+\Big(\sum_{C_{kl}\in N_{h_{1}}(i,j)}(C_{ij}^{kl+})^{q}\Big)^{\frac pq}\Big[2^{p-1}(L_{f})^{p}\sum_{C_{kl}\in N_{h_{1}}(i,j)}\frac{e^{\frac pq\underline{a}_{ij}^{0}\tau_{kl}^{+}}(2\kappa)^{p}}{1-\dot{\tau}_{kl}^{+}}+(M_{f})^{p}\Big]+\Big(\sum_{C_{kl}\in N_{h_{2}}(i,j)}(B_{ij}^{kl+})^{q}\Big)^{\frac pq}\Big[\Big(2\kappa L_{g}\int_{0}^{\infty}|K_{ij}(u)|du\Big)^{p}+\Big(M_{g}\int_{0}^{\infty}|K_{ij}(u)|du\Big)^{p}\Big]\Big]+2^{p-1}C_{p}\Big(\frac{p-2}{2\underline{a}_{ij}^{0}}\Big)^{\frac{p-2}{2}}\Big(\sum_{C_{kl}\in N_{h_{3}}(i,j)}(E_{ij}^{kl+})^{q}\Big)^{\frac pq}(L_{\delta_{ij}})^{p}\frac{e^{\frac pq\underline{a}_{ij}^{0}\sigma_{ij}^{+}}}{1-\dot{\sigma}_{ij}^{+}}\Big\}<1,\quad(p>2),$$
    $$0<\frac{\rho_{2}}{\underline{a}^{0}}:=\frac{32}{\underline{a}^{0}}\max_{ij\in\Lambda}\Big\{\frac{1}{\underline{a}_{ij}^{0}}\sum_{C_{kl}\in N_{h_{1}}(i,j)}(C_{ij}^{kl+})^{2}\Big[(L_{f})^{2}\sum_{C_{kl}\in N_{h_{1}}(i,j)}\frac{e^{\underline{a}_{ij}^{0}\tau_{kl}^{+}}(2\kappa)^{2}}{1-\dot{\tau}_{kl}^{+}}+\frac{(M_{f})^{2}}{2}\Big]+\sum_{C_{kl}\in N_{h_{3}}(i,j)}(E_{ij}^{kl+})^{2}(L_{\delta_{ij}})^{2}\frac{e^{2\underline{a}_{ij}^{0}\sigma_{ij}^{+}}}{1-\dot{\sigma}_{ij}^{+}}+\frac{1}{2\underline{a}_{ij}^{0}}\sum_{C_{kl}\in N_{h_{2}}(i,j)}(B_{ij}^{kl+})^{2}\big(4\kappa^{2}L_{g}^{2}+M_{g}^{2}\big)\Big(\int_{0}^{\infty}|K_{ij}(u)|du\Big)^{2}+\frac{(\bar{a}_{ij}^{c})^{2}}{2\underline{a}_{ij}^{0}}\Big\}<1,\quad(p=2).$$

    (A5) The kernel $K_{ij}$ is almost periodic and there exist constants $M>0$ and $u>0$ such that $|K_{ij}(t)|\le Me^{-ut}$ for all $t\in\mathbb{R}$.

    Let $\mathbb{X}$ indicate the space of all $L^{p}$-bounded and $L^{p}$-uniformly continuous stochastic processes from $\mathbb{R}$ to $L^{p}(\Omega,\mathcal{A}^{m\times n})$; equipped with the norm $\|\phi\|_{\mathbb{X}}=\sup_{t\in\mathbb{R}}\big\{E\|\phi(t)\|_{0}^{p}\big\}^{\frac1p}$, where $\phi=(\phi_{11},\phi_{12},\ldots,\phi_{mn})\in\mathbb{X}$, it is a Banach space.

    Set $\phi^{0}=(\phi_{11}^{0},\phi_{12}^{0},\ldots,\phi_{mn}^{0})^{T}$, where $\phi_{ij}^{0}(t)=\int_{-\infty}^{t}e^{-\int_{s}^{t}a_{ij}^{0}(u)du}L_{ij}(s)ds$, $t\in\mathbb{R}$, $ij\in\Lambda$. Then $\phi^{0}$ is well defined under assumption (A2). Consequently, we can take a constant $\kappa$ such that $\kappa\ge\|\phi^{0}\|_{\mathbb{X}}$.

    Definition 3.1. [37] An $\mathcal{F}_{t}$-progressively measurable stochastic process $x(t)=(x_{11}(t),x_{12}(t),\ldots,x_{mn}(t))^{T}$ is called a solution of system (2.1) if $x(t)$ solves the following integral equation:

    $$\begin{aligned}x_{ij}(t)={}&x_{ij}(t_{0})e^{-\int_{t_{0}}^{t}a_{ij}^{0}(u)du}+\int_{t_{0}}^{t}e^{-\int_{s}^{t}a_{ij}^{0}(u)du}\Big[-a_{ij}^{c}(s)x_{ij}(s)-\sum_{C_{kl}\in N_{h_{1}}(i,j)}C_{ij}^{kl}(s)f(x_{kl}(s-\tau_{kl}(s)))x_{ij}(s)\\&-\sum_{C_{kl}\in N_{h_{2}}(i,j)}B_{ij}^{kl}(s)\int_{0}^{\infty}K_{ij}(u)g(x_{kl}(s-u))du\,x_{ij}(s)+L_{ij}(s)\Big]ds\\&+\int_{t_{0}}^{t}e^{-\int_{s}^{t}a_{ij}^{0}(u)du}\sum_{C_{kl}\in N_{h_{3}}(i,j)}E_{ij}^{kl}(s)\delta_{ij}(x_{ij}(s-\sigma_{ij}(s)))d\omega_{ij}(s).\end{aligned}\tag{3.1}$$

    In (3.1), letting $t_{0}\to-\infty$, one gets

    $$\begin{aligned}x_{ij}(t)={}&\int_{-\infty}^{t}e^{-\int_{s}^{t}a_{ij}^{0}(u)du}\Big[-a_{ij}^{c}(s)x_{ij}(s)-\sum_{C_{kl}\in N_{h_{1}}(i,j)}C_{ij}^{kl}(s)f(x_{kl}(s-\tau_{kl}(s)))x_{ij}(s)\\&-\sum_{C_{kl}\in N_{h_{2}}(i,j)}B_{ij}^{kl}(s)\int_{0}^{\infty}K_{ij}(u)g(x_{kl}(s-u))du\,x_{ij}(s)+L_{ij}(s)\Big]ds\\&+\int_{-\infty}^{t}e^{-\int_{s}^{t}a_{ij}^{0}(u)du}\sum_{C_{kl}\in N_{h_{3}}(i,j)}E_{ij}^{kl}(s)\delta_{ij}(x_{ij}(s-\sigma_{ij}(s)))d\omega_{ij}(s),\quad t\in\mathbb{R},\ ij\in\Lambda.\end{aligned}\tag{3.2}$$

    It is easy to see that if x(t) solves (3.2), then it also solves (2.1).

    Theorem 3.1. Assume that $(A_{1})$-$(A_{4})$ hold. Then system (2.1) has a unique $L^{p}$-bounded and $L^{p}$-uniformly continuous solution in $\mathbb{X}^{*}=\{\phi\in\mathbb{X}:\|\phi-\phi^{0}\|_{\mathbb{X}}\le\kappa\}$, where $\kappa$ is a constant satisfying $\kappa\ge\|\phi^{0}\|_{\mathbb{X}}$.

    Proof. Define an operator $\Psi:\mathbb{X}^{*}\to\mathbb{X}^{*}$ as follows:

    $$(\Psi\phi)(t)=\big((\Psi_{11}\phi)(t),(\Psi_{12}\phi)(t),\ldots,(\Psi_{mn}\phi)(t)\big)^{T},$$

    where $\phi=(\phi_{11},\phi_{12},\ldots,\phi_{mn})^{T}\in\mathbb{X}^{*}$, $t\in\mathbb{R}$, and

    $$\begin{aligned}(\Psi_{ij}\phi)(t)={}&\int_{-\infty}^{t}e^{-\int_{s}^{t}a_{ij}^{0}(u)du}\Big[-a_{ij}^{c}(s)\phi_{ij}(s)-\sum_{C_{kl}\in N_{h_{1}}(i,j)}C_{ij}^{kl}(s)f(\phi_{kl}(s-\tau_{kl}(s)))\phi_{ij}(s)\\&-\sum_{C_{kl}\in N_{h_{2}}(i,j)}B_{ij}^{kl}(s)\int_{0}^{\infty}K_{ij}(u)g(\phi_{kl}(s-u))du\,\phi_{ij}(s)+L_{ij}(s)\Big]ds\\&+\int_{-\infty}^{t}e^{-\int_{s}^{t}a_{ij}^{0}(u)du}\sum_{C_{kl}\in N_{h_{3}}(i,j)}E_{ij}^{kl}(s)\delta_{ij}(\phi_{ij}(s-\sigma_{ij}(s)))d\omega_{ij}(s),\quad ij\in\Lambda.\end{aligned}\tag{3.3}$$

    First of all, let us show that $E\|\Psi\phi(t)-\phi^{0}(t)\|_{0}^{p}\le\kappa^{p}$ for all $\phi\in\mathbb{X}^{*}$.

    Notice that for any $\phi\in\mathbb{X}^{*}$, it holds that

    $$\|\phi\|_{\mathbb{X}}\le\|\phi^{0}\|_{\mathbb{X}}+\|\phi-\phi^{0}\|_{\mathbb{X}}\le2\kappa.$$

    Then, we deduce that

    $$\begin{aligned}E\|\Psi\phi(t)-\phi^{0}(t)\|_{0}^{p}\le{}&4^{p-1}\max_{ij\in\Lambda}\Big\{E\Big\|\int_{-\infty}^{t}e^{-\int_{s}^{t}a_{ij}^{0}(u)du}a_{ij}^{c}(s)\phi_{ij}(s)ds\Big\|_{\mathcal{A}}^{p}\Big\}\\&+4^{p-1}\max_{ij\in\Lambda}\Big\{E\Big\|\int_{-\infty}^{t}e^{-\int_{s}^{t}a_{ij}^{0}(u)du}\sum_{C_{kl}\in N_{h_{1}}(i,j)}C_{ij}^{kl}(s)f(\phi_{kl}(s-\tau_{kl}(s)))\phi_{ij}(s)ds\Big\|_{\mathcal{A}}^{p}\Big\}\\&+4^{p-1}\max_{ij\in\Lambda}\Big\{E\Big\|\int_{-\infty}^{t}e^{-\int_{s}^{t}a_{ij}^{0}(u)du}\sum_{C_{kl}\in N_{h_{2}}(i,j)}B_{ij}^{kl}(s)\int_{0}^{\infty}K_{ij}(u)g(\phi_{kl}(s-u))du\,\phi_{ij}(s)ds\Big\|_{\mathcal{A}}^{p}\Big\}\\&+4^{p-1}\max_{ij\in\Lambda}\Big\{E\Big\|\int_{-\infty}^{t}e^{-\int_{s}^{t}a_{ij}^{0}(u)du}\sum_{C_{kl}\in N_{h_{3}}(i,j)}E_{ij}^{kl}(s)\delta_{ij}(\phi_{ij}(s-\sigma_{ij}(s)))d\omega_{ij}(s)\Big\|_{\mathcal{A}}^{p}\Big\}\\:={}&F_{1}+F_{2}+F_{3}+F_{4}.\end{aligned}\tag{3.4}$$

    By the Hölder inequality, we have

    $$\begin{aligned}F_{2}\le{}&4^{p-1}\max_{ij\in\Lambda}\Big\{E\Big[\int_{-\infty}^{t}e^{-\frac qp\int_{s}^{t}a_{ij}^{0}(u)du}ds\Big]^{\frac pq}\Big[\int_{-\infty}^{t}e^{-\frac pq\int_{s}^{t}a_{ij}^{0}(u)du}\Big\|\sum_{C_{kl}\in N_{h_{1}}(i,j)}C_{ij}^{kl}(s)f(\phi_{kl}(s-\tau_{kl}(s)))\phi_{ij}(s)\Big\|_{\mathcal{A}}^{p}ds\Big]\Big\}\\\le{}&4^{p-1}\max_{ij\in\Lambda}\Big\{\Big(\frac{p}{q\underline{a}_{ij}^{0}}\Big)^{\frac pq}E\Big[\int_{-\infty}^{t}e^{-\frac pq\int_{s}^{t}a_{ij}^{0}(u)du}\Big(\sum_{C_{kl}\in N_{h_{1}}(i,j)}\|C_{ij}^{kl}(s)\|_{\mathcal{A}}^{q}\Big)^{\frac pq}(2\kappa L_{f})^{p}\|\phi_{ij}(s)\|_{\mathcal{A}}^{p}ds\Big]\Big\}\\\le{}&4^{p-1}\max_{ij\in\Lambda}\Big\{\Big(\frac{p}{q\underline{a}_{ij}^{0}}\Big)^{\frac pq}\frac{q}{p\underline{a}_{ij}^{0}}\Big(\sum_{C_{kl}\in N_{h_{1}}(i,j)}(C_{ij}^{kl+})^{q}\Big)^{\frac pq}(2\kappa L_{f})^{p}\Big\}\|\phi\|_{\mathbb{X}}^{p}.\end{aligned}\tag{3.5}$$

    Similarly, one has

    $$F_{1}\le4^{p-1}\max_{ij\in\Lambda}\Big\{\Big(\frac{p}{q\underline{a}_{ij}^{0}}\Big)^{\frac pq}\frac{q}{p\underline{a}_{ij}^{0}}(\bar{a}_{ij}^{c})^{p}\Big\}\|\phi\|_{\mathbb{X}}^{p},\tag{3.6}$$
    $$F_{3}\le4^{p-1}\max_{ij\in\Lambda}\Big\{\Big(\frac{p}{q\underline{a}_{ij}^{0}}\Big)^{\frac pq}\frac{q}{p\underline{a}_{ij}^{0}}\Big(\sum_{C_{kl}\in N_{h_{2}}(i,j)}(B_{ij}^{kl+})^{q}\Big)^{\frac pq}\Big(2\kappa L_{g}\int_{0}^{\infty}|K_{ij}(u)|du\Big)^{p}\Big\}\|\phi\|_{\mathbb{X}}^{p}.\tag{3.7}$$

    By the Burkholder-Davis-Gundy inequality and the Hölder inequality, when $p>2$, we infer that

    $$\begin{aligned}F_{4}\le{}&4^{p-1}C_{p}\max_{ij\in\Lambda}\Big\{E\Big[\int_{-\infty}^{t}e^{-2\int_{s}^{t}a_{ij}^{0}(u)du}\Big\|\sum_{C_{kl}\in N_{h_{3}}(i,j)}E_{ij}^{kl}(s)\delta_{ij}(\phi_{ij}(s-\sigma_{ij}(s)))\Big\|_{\mathcal{A}}^{2}ds\Big]^{\frac p2}\Big\}\\\le{}&4^{p-1}C_{p}\max_{ij\in\Lambda}\Big\{\Big[\int_{-\infty}^{t}\big(e^{-2\int_{s}^{t}a_{ij}^{0}(u)du}\big)^{\frac{p}{p-2}\cdot\frac1p}ds\Big]^{\frac{p-2}{2}}E\Big[\int_{-\infty}^{t}\big(e^{-2\int_{s}^{t}a_{ij}^{0}(u)du}\big)^{\frac1q\cdot\frac p2}\Big\|\sum_{C_{kl}\in N_{h_{3}}(i,j)}E_{ij}^{kl}(s)\delta_{ij}(\phi_{ij}(s-\sigma_{ij}(s)))\Big\|_{\mathcal{A}}^{p}ds\Big]\Big\}\\\le{}&4^{p-1}C_{p}\max_{ij\in\Lambda}\Big\{\Big(\frac{p-2}{2\underline{a}_{ij}^{0}}\Big)^{\frac{p-2}{2}}\frac{q}{p\underline{a}_{ij}^{0}}\Big(\sum_{C_{kl}\in N_{h_{3}}(i,j)}(E_{ij}^{kl+})^{q}\Big)^{\frac pq}(L_{\delta_{ij}})^{p}\Big\}\|\phi\|_{\mathbb{X}}^{p}.\end{aligned}\tag{3.8}$$

    When $p=2$, by the Itô isometry, it follows that

    $$F_{4}\le4\max_{ij\in\Lambda}\Big\{E\Big[\int_{-\infty}^{t}e^{-2\int_{s}^{t}a_{ij}^{0}(u)du}\Big\|\sum_{C_{kl}\in N_{h_{3}}(i,j)}E_{ij}^{kl}(s)\delta_{ij}(\phi_{ij}(s-\sigma_{ij}(s)))\Big\|_{\mathcal{A}}^{2}ds\Big]\Big\}\le4\max_{ij\in\Lambda}\Big\{\frac{1}{2\underline{a}_{ij}^{0}}\sum_{C_{kl}\in N_{h_{3}}(i,j)}(E_{ij}^{kl+})^{2}(L_{\delta_{ij}})^{2}\Big\}\|\phi\|_{\mathbb{X}}^{2}.\tag{3.9}$$

    Putting (3.5)–(3.9) into (3.4), we obtain that

    $$\begin{aligned}\|\Psi\phi-\phi^{0}\|_{\mathbb{X}}^{p}\le{}&4^{p-1}\max_{ij\in\Lambda}\Big\{\Big(\frac{p}{q\underline{a}_{ij}^{0}}\Big)^{\frac pq}\frac{q}{p\underline{a}_{ij}^{0}}\Big[(\bar{a}_{ij}^{c})^{p}+\Big(\sum_{C_{kl}\in N_{h_{1}}(i,j)}(C_{ij}^{kl+})^{q}\Big)^{\frac pq}(2\kappa L_{f})^{p}+\Big(\sum_{C_{kl}\in N_{h_{2}}(i,j)}(B_{ij}^{kl+})^{q}\Big)^{\frac pq}\Big(2\kappa L_{g}\int_{0}^{\infty}|K_{ij}(u)|du\Big)^{p}\Big]\\&+C_{p}\Big(\frac{p-2}{2\underline{a}_{ij}^{0}}\Big)^{\frac{p-2}{2}}\frac{q}{p\underline{a}_{ij}^{0}}\Big(\sum_{C_{kl}\in N_{h_{3}}(i,j)}(E_{ij}^{kl+})^{q}\Big)^{\frac pq}(L_{\delta_{ij}})^{p}\Big\}\|\phi\|_{\mathbb{X}}^{p}\le\kappa^{p},\quad(p>2),\end{aligned}\tag{3.10}$$

    and

    $$\|\Psi\phi-\phi^{0}\|_{\mathbb{X}}^{2}\le4\max_{ij\in\Lambda}\Big\{\frac{1}{(\underline{a}_{ij}^{0})^{2}}\Big[(\bar{a}_{ij}^{c})^{2}+\sum_{C_{kl}\in N_{h_{1}}(i,j)}(C_{ij}^{kl+})^{2}(2\kappa L_{f})^{2}+\sum_{C_{kl}\in N_{h_{2}}(i,j)}(B_{ij}^{kl+})^{2}\Big(2\kappa L_{g}\int_{0}^{\infty}|K_{ij}(u)|du\Big)^{2}\Big]+\frac{1}{2\underline{a}_{ij}^{0}}\sum_{C_{kl}\in N_{h_{3}}(i,j)}(E_{ij}^{kl+})^{2}(L_{\delta_{ij}})^{2}\Big\}\|\phi\|_{\mathbb{X}}^{2}\le\kappa^{2},\quad(p=2).\tag{3.11}$$

    It follows from (3.10), (3.11) and $(A_{3})$ that $\|\Psi\phi-\phi^{0}\|_{\mathbb{X}}\le\kappa$.

    Then, using the same method as that in the proof of Theorem 3.2 in [21], we can show that $\Psi\phi$ is $L^{p}$-uniformly continuous. Therefore, we have $\Psi(\mathbb{X}^{*})\subset\mathbb{X}^{*}$.

    Last, we will show that $\Psi$ is a contraction mapping. Indeed, for any $\varphi,\psi\in\mathbb{X}^{*}$, when $p>2$, we have

    $$\begin{aligned}E\|(\Psi\varphi)(t)-(\Psi\psi)(t)\|_{0}^{p}\le{}&4^{p-1}\max_{ij\in\Lambda}\Big\{E\Big\|\int_{-\infty}^{t}e^{-\int_{s}^{t}a_{ij}^{0}(u)du}a_{ij}^{c}(s)\big(\psi_{ij}(s)-\varphi_{ij}(s)\big)ds\Big\|_{\mathcal{A}}^{p}\Big\}\\&+4^{p-1}\max_{ij\in\Lambda}\Big\{E\Big\|\int_{-\infty}^{t}e^{-\int_{s}^{t}a_{ij}^{0}(u)du}\sum_{C_{kl}\in N_{h_{1}}(i,j)}C_{ij}^{kl}(s)\big[f(\varphi_{kl}(s-\tau_{kl}(s)))\varphi_{ij}(s)-f(\psi_{kl}(s-\tau_{kl}(s)))\psi_{ij}(s)\big]ds\Big\|_{\mathcal{A}}^{p}\Big\}\\&+4^{p-1}\max_{ij\in\Lambda}\Big\{E\Big\|\int_{-\infty}^{t}e^{-\int_{s}^{t}a_{ij}^{0}(u)du}\sum_{C_{kl}\in N_{h_{2}}(i,j)}B_{ij}^{kl}(s)\Big[\int_{0}^{\infty}K_{ij}(u)g(\varphi_{kl}(s-u))du\,\varphi_{ij}(s)-\int_{0}^{\infty}K_{ij}(u)g(\psi_{kl}(s-u))du\,\psi_{ij}(s)\Big]ds\Big\|_{\mathcal{A}}^{p}\Big\}\\&+4^{p-1}\max_{ij\in\Lambda}\Big\{E\Big\|\int_{-\infty}^{t}e^{-\int_{s}^{t}a_{ij}^{0}(u)du}\sum_{C_{kl}\in N_{h_{3}}(i,j)}E_{ij}^{kl}(s)\big[\delta_{ij}(\varphi_{ij}(s-\sigma_{ij}(s)))-\delta_{ij}(\psi_{ij}(s-\sigma_{ij}(s)))\big]d\omega_{ij}(s)\Big\|_{\mathcal{A}}^{p}\Big\}\\\le{}&4^{p-1}\max_{ij\in\Lambda}\Big\{\Big(\frac{p}{q\underline{a}_{ij}^{0}}\Big)^{\frac pq}\frac{q}{p\underline{a}_{ij}^{0}}\Big[(\bar{a}_{ij}^{c})^{p}+\Big(\sum_{C_{kl}\in N_{h_{1}}(i,j)}(C_{ij}^{kl+})^{q}\Big)^{\frac pq}(2\kappa L_{f}+M_{f})^{p}+\Big(\sum_{C_{kl}\in N_{h_{2}}(i,j)}(B_{ij}^{kl+})^{q}\Big)^{\frac pq}\Big((2\kappa L_{g}+M_{g})\int_{0}^{\infty}|K_{ij}(u)|du\Big)^{p}\Big]\\&+C_{p}\Big(\frac{p-2}{2\underline{a}_{ij}^{0}}\Big)^{\frac{p-2}{2}}\frac{q}{p\underline{a}_{ij}^{0}}\Big(\sum_{C_{kl}\in N_{h_{3}}(i,j)}(E_{ij}^{kl+})^{q}\Big)^{\frac pq}(L_{\delta_{ij}})^{p}\Big\}\|\varphi-\psi\|_{\mathbb{X}}^{p}.\end{aligned}\tag{3.12}$$

    Similarly, for $p=2$, we can get

    (3.13)

    From (3.12) and (3.13) it follows that

    Hence, by virtue of $(A_{3})$, $\Psi$ is a contraction mapping. So, $\Psi$ has a unique fixed point in $\mathbb{X}^{*}$, i.e., (2.1) has a unique solution in $\mathbb{X}^{*}$.
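
The mechanism behind this step is the Banach contraction principle: iterating a contraction converges to its unique fixed point. A minimal scalar sketch (illustrative only; the operator here is a stand-in for $\Psi$) is:

```python
import math

def banach_fixed_point(T, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = T(x_n); for a contraction with Lipschitz constant
    L < 1 this converges to the unique fixed point of T."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# T(x) = 0.5*cos(x) is a contraction on R with Lipschitz constant 1/2
x_star = banach_fixed_point(lambda x: 0.5 * math.cos(x), 0.0)
```

Different starting points yield the same limit, mirroring the uniqueness of the solution of (2.1) obtained above.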

    Theorem 3.2. Assume that $(A_{1})$-$(A_{5})$ hold. Then system (2.1) has a unique $p$-th Stepanov almost periodic solution in the distribution sense in $\mathbb{X}^{*}=\{\phi\in\mathbb{X}:\|\phi-\phi^{0}\|_{\mathbb{X}}\le\kappa\}$, where $\kappa$ is a constant satisfying $\kappa\ge\|\phi^{0}\|_{\mathbb{X}}$.

    Proof. From Theorem 3.1, we know that (2.1) has a unique solution $x$ in $\mathbb{X}^{*}$. Now, let us show that $x$ is Stepanov almost periodic in distribution. Since $x\in\mathbb{X}^{*}$, it is $L^{p}$-uniformly continuous and satisfies $\|x\|_{\mathbb{X}}\le2\kappa$. So, for any $\varepsilon>0$, there exists a $\delta>0$ such that whenever $|h|<\delta$ we have $E\|x(t+h)-x(t)\|_{0}^{p}<\varepsilon$. Hence, we derive that

    (3.14)

    For the above, according to , we have, for ,

    As , by (3.14), there holds

    Based on (3.2), we can infer that

    in which is a Brownian motion having the same distribution as .

    Let us consider the process

    (3.15)

    From (3.2) and (3.15), we deduce that

    (3.16)

    Employing the Hölder inequality, we can obtain

    By a change of variables and Fubini's theorem, we infer that

    (3.17)

    where

    Similarly, when , one can obtain

    (3.18)

    where

    (3.19)

    and when , we have

    (3.20)

    where

    (3.21)

    In the same way, we can get

    (3.22)
    (3.23)
    (3.24)
    (3.25)
    (3.26)
    (3.27)
    (3.28)
    (3.29)

    Noting that

    (3.30)

    We can gain

    (3.31)
    (3.32)
    (3.33)
    (3.34)

    when , we have

    (3.35)

    for , we get

    (3.36)

    Substituting (3.17)–(3.36) into (3.16), we have the following two cases:

    Case 1. When $p>2$, we have

    where is the same as that in and

    By $(A_{4})$, we know $\rho_{1}<1$. Hence, we derive that

    (3.37)

    Case 2. When $p=2$, we can obtain

    where is defined in and

    Similar to the previous case, by $(A_{4})$, we know $\rho_{2}<1$, and hence we can get that

    (3.38)

    Noting that

    Hence, we have

    (3.39)

    Combining (3.37)-(3.39), we can conclude that $x$ is $p$-th Stepanov almost periodic in the distribution sense. The proof is complete.

    Similar to the proof of Theorem 3.7 in [21], one can easily show the following result.

    Theorem 3.3. Suppose that $(A_{1})$-$(A_{5})$ are fulfilled, and let $x$ be the Stepanov almost periodic solution in the distribution sense of system (2.1) with initial value $\phi$. Then every solution of (2.1) with a different initial value converges to $x$ at an exponential rate; that is, the solution $x$ is globally exponentially stable.

    The purpose of this section is to demonstrate the effectiveness of the results obtained in this paper through a numerical example.

    In neural network (2.1), choose , , and

    and let . Then we get

    Take , , then we have

    And when , we have

    Thus, all assumptions in Theorems 3.2 and 3.3 are fulfilled, so we can conclude that system (2.1) has a unique Stepanov almost periodic solution in the distribution sense, which is globally exponentially stable.

    The results are also verified by the numerical simulations shown in Figures 1-4.

    Figure 1.  Global exponential stability of states and of (2.1).
    Figure 2.  Global exponential stability of states and of (2.1).
    Figure 3.  Global exponential stability of states and of (2.1).
    Figure 4.  Global exponential stability of states and of (2.1).

    From these figures, we can observe that even when the four primary components of each solution of this system take different initial values, they eventually tend to stabilize. This confirms that solutions satisfying the above conditions do exist and are exponentially stable.

    In this article, we established the existence and global exponential stability of Stepanov almost periodic solutions in the distribution sense for a class of stochastic Clifford-valued SICNNs with mixed delays. Even when network (2.1) degenerates into a real-valued NN, the results of this paper are new. In fact, uncertainty, namely fuzziness, is also an issue that needs to be considered in real system modeling; here, however, we considered only the disturbance of random factors and did not address fuzziness. Studying the effects of both random perturbations and fuzziness in an NN is a direction for our future work.

    The authors declare that they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work is supported by the National Natural Science Foundation of China under Grant No. 12261098.

    The authors declare that they have no conflicts of interest.



    [1] W. Zhou, H. Li, Q. Tian, Recent advance in content-based image retrieval: a literature survey, preprint, arXiv: 1706.06064.
    [2] J. H. Friedman, J. L. Bentley, R. A. Finkel, An algorithm for finding best matches in logarithmic expected time, ACM Trans. Math. Software, 3 (1977), 209–226. https://doi.org/10.1145/355744.355745 doi: 10.1145/355744.355745
    [3] A. Gionis, P. Indyk, R. Motwani, Similarity search in high dimensions via hashing, in International Conference on Very Large Data Bases, 99 (1999), 518–529. Available from: https://www.cs.princeton.edu/courses/archive/spring13/cos598C/Gionis.pdf.
    [4] Y. Gong, S. Lazebnik, A. Gordo, F. Perronnin, Iterative quantization: a procrustean approach to learning binary codes for large-scale image retrieval, IEEE Trans. Pattern Anal. Mach. Intell., 35 (2012), 2916–2929. https://doi.org/10.1109/TPAMI.2012.193 doi: 10.1109/TPAMI.2012.193
    [5] W. J. Li, S. Wang, W. C. Kang, Feature learning based deep supervised hashing with pairwise labels, preprint, arXiv: 1511.03855.
    [6] Y. Weiss, A. Torralba, R. Fergus, Spectral hashing, in Advances in Neural Information Processing Systems, 21 (2008), 1753–1760. Available from: https://proceedings.neurips.cc/paper_files/paper/2008/file/d58072be2820e8682c0a27c0518e805e-Paper.pdf.
    [7] W. Liu, J. Wang, S. Kumar, S. F. Chang, Hashing with graphs, in Proceedings of the 28 th International Conference on Machine Learning, (2011), 1–8. Available from: https://storage.googleapis.com/pub-tools-public-publication-data/pdf/37599.pdf.
    [8] W. Liu, J. Wang, R. Ji, Y. G. Jiang, S. F. Chang, Supervised hashing with kernels, in 2012 IEEE Conference on Computer Vision and Pattern Recognition, (2012), 2074–2081. https://doi.org/10.1109/CVPR.2012.6247912
    [9] F. Shen, C. Shen, W. Liu, H. T. Shen, Supervised discrete hashing, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2015), 37–45.
    [10] W. C. Kang, W. J. Li, Z. H. Zhou, Column sampling based discrete supervised hashing, in Proceedings of the AAAI Conference on Artificial Intelligence, 30 (2016), 1230–1236. https://doi.org/10.1609/aaai.v30i1.10176
    [11] Z. Cao, M. Long, J. Wang, P. S. Yu, Hashnet: deep learning to hash by continuation, in Proceedings of the IEEE International Conference on Computer Vision (ICCV), (2017), 5608–5617.
    [12] H. Zhu, M. Long, J. Wang, Y. Cao, Deep hashing network for efficient similarity retrieval, in Proceedings of the AAAI Conference on Artificial Intelligence, 30 (2016), 2415–2421. https://doi.org/10.1609/aaai.v30i1.10235
    [13] H. Liu, R. Wang, S. Shan, X. Chen, Deep supervised hashing for fast image retrieval, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2016), 2064–2072.
    [14] G. Irie, H. Arai, Y. Taniguchi, Alternating co-quantization for cross-modal hashing, in Proceedings of the IEEE International Conference on Computer Vision (ICCV), (2015), 1886–1894.
    [15] M. Long, Y. Cao, J. Wang, P. S. Yu, Composite correlation quantization for efficient multimodal retrieval, in Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval, (2016), 579–588. https://doi.org/10.1145/2911451.2911493
    [16] Y. Cao, M. Long, J. Wang, S. Liu, Deep visual-semantic quantization for efficient image retrieval, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2017), 1328–1337.
    [17] Y. Cao, M. Long, J. Wang, S. Liu, Collective deep quantization for efficient cross-modal retrieval, in Thirty-First AAAI Conference on Artificial Intelligence, 31 (2017), 3974–3980. https://doi.org/10.1609/aaai.v31i1.11218
    [18] E. Yang, C. Deng, C. Li, W. Liu, J. Li, D. Tao, Shared predictive cross-modal deep quantization, IEEE Trans. Neural Networks Learn. Syst., 29 (2018), 5292–5303. https://doi.org/10.1109/TNNLS.2018.2793863 doi: 10.1109/TNNLS.2018.2793863
    [19] Y. Fu, T. Xiang, Y. Jiang, X. Xue, L. Sigal, S. Gong, Recent advances in zero-shot recognition: toward data-efficient understanding of visual content, IEEE Signal Process Mag., 35 (2017), 112–125. https://doi.org/10.1109/MSP.2017.2763441 doi: 10.1109/MSP.2017.2763441
    [20] L. Zhang, T. Xiang, S. Gong, Learning a deep embedding model for zero-shot learning, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2017), 2021–2030.
    [21] Y. Li, Z. Jia, J. Zhang, K. Huang, T. Tan, Deep semantic structural constraints for zero-shot learning, in Proceedings of the AAAI Conference on Artificial Intelligence, 32 (2018), 7049–7056. https://doi.org/10.1609/aaai.v32i1.12244
    [22] A. Farhadi, I. Endres, D. Hoiem, D. A. Forsyth, Describing objects by their attributes, in 2009 IEEE Conference on Computer Vision and Pattern Recognition, (2009), 1778–1785. https://doi.org/10.1109/CVPR.2009.5206772
    [23] T. Mikolov, K. Chen, G. Corrado, J. Dean, Efficient estimation of word representations in vector space, preprint, arXiv: 1301.3781.
    [24] G. A. Miller, Wordnet: a lexical database for English, Commun. ACM, 38 (1995), 39–41. https://doi.org/10.1145/219717.219748 doi: 10.1145/219717.219748
    [25] Y. Guo, G. Ding, J. Han, Y. Gao, Sitnet: discrete similarity transfer network for zero-shot hashing, in Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17), (2017), 1767–1773. Available from: https://www.ijcai.org/proceedings/2017/0245.pdf.
    [26] Y. Yang, Y. Luo, W. Chen, F. Shen, J. Shao, H. T. Shen, Zero-shot hashing via transferring supervised knowledge, in Proceedings of the 24th ACM International Conference on Multimedia, (2016), 1286–1295. https://doi.org/10.1145/2964284.2964319
    [27] Y. Xu, Y. Yang, F. Shen, X. Xu, Y. Zhou, H. T. Shen, Attribute hashing for zero-shot image retrieval, in 2017 IEEE International Conference on Multimedia and Expo (ICME), (2017), 133–138. https://doi.org/10.1109/ICME.2017.8019425
    [28] H. Jiang, R. Wang, S. Shan, X. Chen, Learning class prototypes via structure alignment for zero-shot recognition, in Computer Vision – ECCV 2018, (2018), 121–138. https://doi.org/10.1007/978-3-030-01249-6_8
    [29] Q. Li, Z. Sun, R. He, T. Tan, Deep supervised discrete hashing, in Advances in Neural Information Processing Systems, 30 (2017), 2479–2488. Available from: https://proceedings.neurips.cc/paper_files/paper/2017/file/e94f63f579e05cb49c05c2d050ead9c0-Paper.pdf.
    [30] Y. Cao, M. Long, J. Wang, Correlation hashing network for efficient cross-modal retrieval, preprint, arXiv: 1602.06697.
    [31] T. Ge, K. He, Q. Ke, J. Sun, Optimized product quantization, IEEE Trans. Pattern Anal. Mach. Intell., 36 (2013), 744–755. https://doi.org/10.1109/TPAMI.2013.240 doi: 10.1109/TPAMI.2013.240
    [32] A. Krizhevsky, I. Sutskever, G. E. Hinton, Imagenet classification with deep convolutional neural networks, Commun. ACM, (2017), 84–90. https://doi.org/10.1145/3065386 doi: 10.1145/3065386
    [33] Y. Liu, H. Li, X. Wang, Rethinking feature discrimination and polymerization for large-scale recognition, preprint, arXiv: 1710.00870.
    [34] A. Lazaridou, G. Dinu, M. Baroni, Hubness and pollution: delving into cross-space mapping for zero-shot learning, in Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, 1 (2015), 270–280. https://doi.org/10.3115/v1/P15-1027
    [35] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, et al., Automatic differentiation in pytorch, 2017. Available from: https://openreview.net/forum?id = BJJsrmfCZ.
    [36] J. Besag, On the statistical analysis of dirty pictures, J. R. Stat. Soc., 48 (1986), 48–259. https://doi.org/10.1111/j.2517-6161.1986.tb01412.x doi: 10.1111/j.2517-6161.1986.tb01412.x
    [37] C. H. Lampert, H. Nickisch, S. Harmeling, Attribute-based classification for zero-shot visual object categorization, IEEE Trans. Pattern Anal. Mach. Intell., 36 (2013), 453–465. https://doi.org/10.1109/TPAMI.2013.140 doi: 10.1109/TPAMI.2013.140
    [38] J. Deng, W. Dong, R. Socher, L. Li, K. Li, F. Li, Imagenet: a large-scale hierarchical image database, in 2009 IEEE Conference on Computer Vision and Pattern Recognition, (2009), 248–255. https://doi.org/10.1109/CVPR.2009.5206848
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
