
Approximate nearest neighbor (ANN) search has become an essential paradigm for large-scale image retrieval. Conventional ANN search requires the categories of query images to be seen in the training set. However, given the rapid emergence of new concepts on the web, it is too expensive to collect labeled data for the new (unseen) concepts and retrain the model. Existing zero-shot hashing methods choose the semantic space or an intermediate space as the embedding space, which ignores the inconsistency between the visual space and the semantic space and suffers from the hubness problem in the zero-shot image retrieval task. In this paper, we present a novel deep quantization network with visual-semantic alignment for efficient zero-shot image retrieval. Specifically, we adopt a multi-task architecture that is capable of 1) learning discriminative and polymeric image representations to facilitate the visual-semantic alignment; 2) learning discriminative semantic embeddings for knowledge transfer; and 3) learning compact binary codes for aligning the visual space and the semantic space. We compare the proposed method with state-of-the-art methods on several benchmark datasets, and the experimental results validate the superiority of the proposed method.
Citation: Huixia Liu, Zhihong Qin. Deep quantization network with visual-semantic alignment for zero-shot image retrieval[J]. Electronic Research Archive, 2023, 31(7): 4232-4247. doi: 10.3934/era.2023215
As stated in [1], in a real-world nervous system, synaptic transmission is a noisy process caused by random fluctuations in neurotransmitter release and other probabilistic factors. Therefore, it is necessary to consider stochastic neural networks (NNs), because random inputs may change the dynamics of the NN [2,3,4,5].
Shunting inhibitory cellular neural networks (SICNNs), which were proposed in [6], have attracted the interest of many scholars since their introduction due to their special roles in psychophysics, robotics, adaptive pattern recognition, vision, and image processing. In these applications their dynamics play an important role, and their various dynamics have accordingly been extensively studied (see [7,8,9,10,11,12,13] and the references therein). However, there is limited research on the dynamics of stochastic SICNNs, so it is necessary to study such NNs further.
On the one hand, research on the dynamics of NNs that take values in a noncommutative algebra, such as quaternion-valued NNs [14,15,16], octonion-valued NNs [17,18,19,20], and Clifford-valued NNs [21,22,23], has gained the interest of many researchers, because such NNs include the typical real-valued NNs as special cases and have superior multi-dimensional signal processing and data storage capabilities compared with real-valued NNs. It is worth mentioning that in recent years many authors have conducted extensive research on various dynamics of Clifford-valued NNs, such as the existence, multiplicity and stability of equilibrium points, the existence, multiplicity and stability of almost periodic solutions, and synchronization problems [22,23,24,25,26,27,28,29,30]. However, most of the existing results on the dynamics of Clifford-valued NNs have been obtained through decomposition methods [24,25,26,27], whose results are generally not convenient for direct application, and there is little research on Clifford-valued NNs using non-decomposition methods [28,29,30]. Therefore, further exploration of non-decomposition methods for studying the dynamics of Clifford-valued NNs has important theoretical significance and application value.
On the other hand, Bohr almost periodicity is a special case of Stepanov almost periodicity, but there is little research on Stepanov almost periodic oscillations of NNs [19,31,32,33]; in particular, results on Stepanov almost periodic solutions of stochastic SICNNs with discrete and infinitely distributed delays have not yet been published.
Motivated by the above discussion, the purpose of this article is to establish the existence and global exponential stability of Stepanov almost periodic solutions in the distribution sense for a stochastic Clifford-valued SICNN with mixed delays via a non-decomposition method.
The subsequent sections of this article are organized as follows. Section 2 introduces some concepts, notations, and basic lemmas and gives a model description. Section 3 discusses the existence and stability of Stepanov almost periodic solutions in the distribution sense of the NN under consideration. An example is provided in Section 4. Finally, Section 5 provides a brief conclusion.
Let A={∑ϑ∈Pxϑeϑ,xϑ∈R} be a real Clifford-algebra with N generators e∅=e0=1, and eh,h=1,2,⋯,N, where P={∅,0,1,2,⋯,ϑ,⋯,12⋯N}, e2i=1,i=1,2,⋯,r,e2i=−1,i=r+1,r+2,⋯,m,eiej+ejei=0,i≠j and i,j=1,2,⋯,N. For x=∑ϑ∈Pxϑeϑ∈A, we indicate ‖x‖♭=maxϑ∈P{|xϑ|},xc=∑ϑ≠∅xϑeϑ,x0=x−xc, and for x=(x11,x12,…,x1n,x21,x22,…,x2n,…,xmn)T∈Am×n, we denote ‖x‖0=max{‖xij‖♭,1≤i≤m,1≤j≤n}. The derivative of x(t)=∑ϑ∈Pxϑ(t)eϑ is defined by ˙x(t)=∑ϑ∈P˙xϑ(t)eϑ and the integral of x(t)=∑ϑ∈Pxϑ(t)eϑ over the interval [a,b] is defined by ∫bax(t)dt=∑ϑ∈P(∫baxϑ(t)dt)eϑ.
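For readers who prefer a computational view, the ♭-norm and the scalar/non-scalar split defined above can be sketched numerically; representing an element x=∑ϑxϑeϑ as a Python dict keyed by basis multi-indices is an illustrative convention of this sketch, not notation from the paper.

```python
# Minimal sketch (illustrative representation, not the paper's machinery):
# a Clifford element x = sum_theta x_theta * e_theta is stored as a dict
# mapping each basis multi-index theta (a tuple of generator labels) to the
# real coefficient x_theta; the empty tuple () stands for e_0 = 1.

def flat_norm(x):
    """||x||_flat = max_theta |x_theta|, the norm used in the paper."""
    return max(abs(c) for c in x.values()) if x else 0.0

def split(x):
    """Split x into its scalar part x^0 and non-scalar part x^c = x - x^0."""
    x0 = {(): x.get((), 0.0)}
    xc = {theta: c for theta, c in x.items() if theta != ()}
    return x0, xc

# Example: x = 2*e_0 - 3*e_1 + 0.5*e_{12}
x = {(): 2.0, (1,): -3.0, (1, 2): 0.5}
print(flat_norm(x))              # 3.0
x0, xc = split(x)
print(x0[()], flat_norm(xc))     # 2.0 3.0
```

The matrix norm ‖x‖0 on Am×n is then just the maximum of this ♭-norm over the mn entries.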
Let (Y,ρ) be a separable metric space and P(Y) the collection of all probability measures defined on the Borel σ-algebra of Y. Denote by Cb(Y) the set of bounded continuous functions g:Y→R with ‖g‖∞:=supx∈Y{|g(x)|}<∞.
For g∈Cb(Y), μ,ν∈P(Y), let us define
‖g‖L=supx≠y|g(x)−g(y)|ρ(x,y),‖g‖BL=max{‖g‖∞,‖g‖L}, |
ρBL(μ,ν):=sup‖g‖BL≤1|∫Ygd(μ−ν)|. |
According to [34], (P(Y),ρBL(⋅,⋅)) is a Polish space.
Definition 2.1. [35] A continuous function g:R→Y is called almost periodic if for every ε>0, there is an ℓ(ε)>0 such that each interval with length ℓ has a point τ meeting
ρ(g(t+τ),g(t))<ε,forallt∈R. |
We indicate by AP(R,Y) the set of all such functions.
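Definition 2.1 can be probed numerically with the classical example g(t)=sin t+sin(√2 t), which is almost periodic but not periodic. Taking candidate almost periods τ=2πq from continued-fraction convergents p/q of √2 is an assumption of this sketch, not a construction from [35].

```python
import math

# For g(t) = sin t + sin(sqrt(2) t), the convergent 41/29 of sqrt(2) gives
# tau = 2*pi*29: the first term is exactly 2*pi-periodic in tau, and the
# second is shifted by |sqrt(2)*tau - 2*pi*41| = 2*pi*|29*sqrt(2) - 41|,
# which is about 0.077, so tau is an eps-almost period for eps = 0.1.

def sup_shift_error(g, tau, grid):
    """Sampled version of sup_t |g(t + tau) - g(t)| over a finite grid."""
    return max(abs(g(t + tau) - g(t)) for t in grid)

g = lambda t: math.sin(t) + math.sin(math.sqrt(2) * t)
grid = [k * 0.01 for k in range(200000)]   # sample of the real line
tau = 2 * math.pi * 29
err = sup_shift_error(g, tau, grid)
print(err)    # ~0.077, below eps = 0.1
```

Because sin is 1-Lipschitz, the true supremum is bounded by 2π|29√2−41|≈0.0766, which the sampled value approaches; no single τ works for every ε, which is exactly why the definition quantifies over relatively dense sets of almost periods.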
Let (X,‖⋅‖) signify a separable Banach space. Denote by μ(X):=P∘X−1 and E(X) the distribution and the expectation of X:(Ω,F,P)→X, respectively.
Let Lp(Ω,X) indicate the family of all X-valued random variables satisfying E(‖X‖p)=∫Ω‖X‖pdP<∞.
Definition 2.2. [21] A process Z:R→Lp(Ω,X) is called Lp-continuous if for any t0∈R,
limt→t0E‖Z(t)−Z(t0)‖p=0. |
It is Lp-bounded if supt∈RE‖Z(t)‖p<∞.
For 1<p<∞, we denote by Lploc(R,X) the space of all functions from R to X which are locally p-integrable. For g∈Lploc(R,X), we consider the following Stepanov norm:
‖g‖Sp=supt∈R(∫t+1t‖g(s)‖pds)1p. |
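The Stepanov norm can be approximated numerically by replacing the integral with a Riemann sum and the supremum with a maximum over finitely many window starts; both discretizations (and the test function sin t) are assumptions of this sketch.

```python
import math

# Numerical sketch of ||g||_{S^p} = sup_t ( int_t^{t+1} |g(s)|^p ds )^{1/p}:
# the inner integral is a left Riemann sum with step dt, and the outer sup
# slides the unit window over a finite range of starting points.

def stepanov_norm(g, p=2, t_range=(-10.0, 10.0), dt=1e-3, window_step=0.25):
    lo, hi = t_range
    best, t = 0.0, lo
    n = int(1.0 / dt)
    while t <= hi:
        integral = sum(abs(g(t + k * dt)) ** p * dt for k in range(n))
        best = max(best, integral ** (1.0 / p))
        t += window_step
    return best

# For g(t) = sin t, int_t^{t+1} sin^2 s ds = 1/2 - cos(2t+1)*sin(1)/2, whose
# sup over t is 1/2 + sin(1)/2, so ||sin||_{S^2} = sqrt(1/2 + sin(1)/2).
print(stepanov_norm(math.sin, p=2))        # ~0.96
print(math.sqrt(0.5 + math.sin(1) / 2))    # ~0.96
```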
Definition 2.3. [35] A function g∈Lploc(R,X) is called p-th Stepanov almost periodic if for any ε>0, it is possible to find a number ℓ>0 such that every interval with length ℓ has a number τ such that
‖g(t+τ)−g(t)‖Sp<ε. |
Definition 2.4. [9] A stochastic process Z∈Lploc(R,Lp(Ω,X)) is said to be Sp-bounded if
‖Z‖Sps:=supt∈R(∫t+1tE‖Z(s)‖pds)1p<∞. |
Definition 2.5. [9] A stochastic process Z∈Lploc(R,Lp(Ω,X)) is called Stepanov almost periodic in p-th mean if for any ε>0, it is possible to find a number ℓ>0 such that every interval with length ℓ has a number τ such that
‖Z(t+τ)−Z(t)‖Sps<ε. |
Definition 2.6. [9] A stochastic process Z:R→Lp(Ω,X) is said to be p-th Stepanov almost periodic in the distribution sense if for each ε>0, it is possible to find a number ℓ>0 such that any interval with length ℓ has a number τ such that
supa∈R(∫a+1adpBL(P∘[Z(t+τ)]−1,P∘[Z(t)]−1)dt)1p<ε. |
Lemma 2.1. [36] (Burkholder–Davis–Gundy inequality) If f∈L2(J,R) with J=[t0,T], p>2, and B(t) is a Brownian motion, then
E[supt∈J|∫tt0f(s)dB(s)|p]≤CpE[∫Tt0|f(s)|2ds]p2, |
where Cp=(pp+1/(2(p−1)p−1))p/2.
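Lemma 2.1 can be sanity-checked (not proved) by Monte Carlo. With f≡1 on J=[0,1] the stochastic integral is Brownian motion itself, so the bound reads E[supt≤1|B(t)|p]≤Cp; the path count, step size, seed, and the choice p=4 are assumptions of this sketch.

```python
import random

# Monte Carlo probe of the Burkholder-Davis-Gundy bound with f = 1 on [0,1]:
# int_0^t f dB = B(t), and E[(int_0^1 |f|^2 ds)^{p/2}] = 1, so the right-hand
# side is just the constant C_p = (p^{p+1} / (2 (p-1)^{p-1}))^{p/2}.

random.seed(0)
p = 4
C_p = (p ** (p + 1) / (2 * (p - 1) ** (p - 1))) ** (p / 2)

n_paths, n_steps = 2000, 500
dt = 1.0 / n_steps
lhs = 0.0
for _ in range(n_paths):
    b, sup_abs = 0.0, 0.0
    for _ in range(n_steps):
        b += random.gauss(0.0, dt ** 0.5)   # Brownian increment dB
        sup_abs = max(sup_abs, abs(b))
    lhs += sup_abs ** p
lhs /= n_paths        # estimates E[sup_{t<=1} |B(t)|^4]
rhs = C_p             # here C_p = (4^5 / (2*3^3))^2, about 359.6
print(lhs, rhs)       # the empirical mean sits far below C_p
```

The BDG constant is far from tight for this f; the point of the lemma in the proofs below is only that the p-th moment of the supremum is controlled by the quadratic variation.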
The model that we consider in this paper is the following stochastic Clifford-valued SICNN with mixed delays:
dxij(t)=[−aij(t)xij(t)+∑Ckl∈Nh1(i,j)Cklij(t)f(xkl(t−τkl(t)))xij(t)+∑Ckl∈Nh2(i,j)Bklij(t)∫∞0Kij(u)g(xkl(t−u))duxij(t)+Lij(t)]dt+∑Ckl∈Nh3(i,j)Eklij(t)δij(xij(t−σij(t)))dωij(t), | (2.1) |
where i=1,2,⋯,m, j=1,2,⋯,n, Cij represents the cell at the (i,j) position, and the h1-neighborhood Nh1(i,j) of Cij is given as:
Nh1(i,j)={Ckl:max(|k−i|,|l−j|)≤h1,1≤k≤m,1≤l≤n}, |
Nh2(i,j) and Nh3(i,j) are similarly defined; xij denotes the activity of the cell Cij; Lij(t):R→A corresponds to the external input to Cij; the function aij(t):R→A represents the decay rate of the cell activity; Cklij(t):R→A, Bklij(t):R→A and Eklij(t):R→A signify the connection or coupling strength of the postsynaptic activity transmitted to the cell Cij; the activity functions f(⋅):A→A and g(⋅):A→A are continuous functions representing the output or firing rate of the cell Ckl; τkl(t),σij(t):R→R+ are the transmission delays; the kernel Kij(t):R→R is an integrable function; ωij(t) represents a Brownian motion defined on a complete probability space; and δij(⋅):A→A is a Borel measurable function.
Let (Ω, F, {Ft}t⩾0, P) be a complete probability space in which {Ft}t⩾0 is a natural filtration meeting the usual conditions. Denote by BF0([−θ,0],An) the family of bounded, F0-measurable and An-valued random variables from [−θ,0]→An. The initial values of system (2.1) are depicted as
xij(s)=ϕij(s),s∈[−θ,0],
where ϕij∈BF0([−θ,0],A) and θ=maxij∈Λ{supt∈Rτij(t),supt∈Rσij(t)}.
For convenience, we introduce the following notations:
a_0=minij∈Λa_0ij=minij∈Λinft∈Ra0ij(t),ˉa0=maxij∈Λˉa0ij=maxij∈Λsupt∈Ra0ij(t),Cklij+=supt∈R‖Cklij(t)‖♭,¯ac=maxij∈Λˉacij=maxij∈Λsupt∈R‖acij(t)‖♭,Bklij+=supt∈R‖Bklij(t)‖♭,Eklij+=supt∈R‖Eklij(t)‖♭,K+ij=supt∈RKij(t),τ+kl=supt∈Rτkl(t),˙τ+kl=supt∈R˙τkl(t),σ+ij=supt∈Rσij(t),˙σ+ij=supt∈R˙σij(t),ML=maxij∈ΛL+ij=maxij∈Λsupt∈R‖Lij(t)‖♭,θ=maxij∈Λ{τ+ij,σ+ij},Λ={11,12,⋯,1n,⋯,mn}. |
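To make the model concrete, the Euler–Maruyama sketch below simulates a one-cell, real-valued caricature of system (2.1) with a single discrete delay in the drift and in the noise; every parameter value and the scalar reduction are illustrative assumptions, not data from the paper.

```python
import math, random

# One-cell, real-valued caricature of (2.1) (illustrative parameters):
#   dx = [-a(t)*x + C*f(x(t-tau))*x + L(t)] dt + E*delta(x(t-sigma)) dW,
# discretized by Euler-Maruyama with a constant initial history on [-theta,0].

random.seed(1)
a = lambda t: 2.0 + 0.1 * math.sin(t)                               # decay rate
L = lambda t: 0.5 * math.sin(t) + 0.3 * math.sin(math.sqrt(2) * t)  # a.p. input
f = math.tanh                      # bounded Lipschitz activity function
delta = lambda v: 0.1 * math.tanh(v)   # Lipschitz noise intensity, delta(0)=0
C_strength, E_strength = 0.3, 1.0
tau, sigma, dt, T = 0.5, 0.3, 1e-3, 20.0

n = int(T / dt)
lag_f, lag_d = int(tau / dt), int(sigma / dt)
x = [0.1] * (max(lag_f, lag_d) + 1)        # constant initial history phi
for k in range(n):
    t = k * dt
    drift = -a(t) * x[-1] + C_strength * f(x[-1 - lag_f]) * x[-1] + L(t)
    diff = E_strength * delta(x[-1 - lag_d])
    x.append(x[-1] + drift * dt + diff * random.gauss(0.0, dt ** 0.5))

print(max(abs(v) for v in x))   # trajectory stays bounded, as the theory predicts
```

With a dominant decay rate and bounded, Lipschitz nonlinearities (the discrete analogue of (A1)–(A3)), the simulated path settles into a bounded oscillatory regime rather than blowing up.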
Throughout this paper, we make the following assumptions:
(A1) For ij∈Λ, f,g,δij∈C(A,A) satisfy the Lipschitz condition, and f,g are bounded, that is, there exist constants Lf>0,Lg>0,Lδij>0,Mf>0,Mg>0 such that for all x,y∈A,
||f(x)−f(y)||♭≤Lf||x−y||♭,||g(x)−g(y)||♭≤Lg||x−y||♭,||δij(x)−δij(y)||♭≤Lδij||x−y||♭,||f(x)||♭≤Mf,||g(x)||♭≤Mg; |
furthermore, f(0)=g(0)=δij(0)=0.
(A2) For ij∈Λ, a0ij∈AP(R,R+),acij∈AP(R,A),τij,σij∈AP(R,R+)∩C1(R,R) satisfying 1−˙τ+ij,1−˙σ+ij>0, Cklij,Bklij,Eklij∈AP(R,A), L=(L11,L12,⋯,Lmn)∈Lploc(R,Lp(Ω,Am×n)) is almost periodic in the sense of Stepanov.
(A3) For p>2,1p+1q=1,
0<r1:=8p4maxij∈Λ{(pqa_0ij)pqqpa_0ij[(ˉacij)p+(∑Ckl∈Nh1(i,j)(Cklij+)q)pq(2κLf+Mf)p+(∑Ckl∈Nh2(i,j)(Bklij+)q)pq((2κLg+Mg)∫∞0|Kij(u)|du)p]+Cp(p−22a_0ij)p−22qpa_0ij(∑Ckl∈Nh3(i,j)(Eklij+)q)pq(Lδij)p}<1, |
and for p=2,
0<r2:=16maxij∈Λ{1(a_0ij)2[(ˉacij)2+∑Ckl∈Nh1(i,j)(Cklij+)2(2κLf+Mf)2+∑Ckl∈Nh2(i,j)(Bklij+)2×((2κLg+Mg)∫∞0|Kij(u)|du)2]+12a_0ij∑Ckl∈Nh3(i,j)(Eklij+)2(Lδij)2}<1. |
(A4) For 1p+1q=1,
0<qpa_0ρ1:=16p−1qpa_0maxij∈Λ{(pqa_0ij)pq[(ˉacij)p+(∑Ckl∈Nh1(i,j)(Cklij+)q)pq[2p−1(Lf)p×∑Ckl∈Nh1(i,j)epqa_0ijτkl+(2κ)p1−˙τ+kl+(Mf)p]+(∑Ckl∈Nh2(i,j)(Bklij+)q)pq[(2κLg×∫∞0|Kij(u)|du)p+(Mg∫∞0|Kij(u)|du)p]]+2p−1Cp(p−22a_0ij)p−22×(∑Ckl∈Nh3(i,j)(Eklij+)q)pq(Lδij)pepqa_0ijσ+ij1−˙σ+ij}<1,(p>2), |
0<ρ2a_0:=32a_0maxij∈Λ{(1a_0ij)∑Ckl∈Nh1(i,j)(Cklij+)2[(Lf)2∑Ckl∈Nh1(i,j)ea_0ijτkl+(2κ)21−˙τ+kl+(Mf)22]+∑Ckl∈Nh3(i,j)(Eklij+)2(Lδij)2e2a_0ijσ+ij1−˙σ+ij+12a_0ij∑Ckl∈Nh2(i,j)(Bklij+)2(4κ2L2g+M2g)×(∫∞0|Kij(u)|du)2+(ˉacij)22a_0ij}<1,(p=2). |
(A5) The kernel Kij is almost periodic and there exist constants M>0 and u>0 such that |Kij(t)|≤Me−ut for all t∈R.
Let X denote the space of all Lp-bounded and Lp-uniformly continuous stochastic processes from R to Lp(Ω,Am×n). Equipped with the norm ‖ϕ‖X=supt∈R{E‖ϕ(t)‖p0}1p, where ϕ=(ϕ11,ϕ12,…,ϕmn)∈X, it is a Banach space.
Set ϕ0=(ϕ011,ϕ012,…,ϕ0mn)T, where ϕ0ij(t)=∫t−∞e−∫tsa0ij(u)duLij(s)ds,t∈R,ij∈Λ. Then, ϕ0 is well defined under assumption (A2). Consequently, we can take a constant κ such that κ≥‖ϕ0‖X.
Definition 3.1. [37] An Ft-progressively measurable stochastic process x(t)=(x11(t),x12(t),…,xmn(t))T is called a solution of system (2.1), if x(t) solves the following integral equation:
xij(t)=xij(t0)e−∫tt0a0ij(u)du+∫tt0e−∫tsa0ij(u)du[−acij(s)xij(s)+∑Ckl∈Nh1(i,j)Cklij(s)×f(xkl(s−τkl(s)))xij(s)+∑Ckl∈Nh2(i,j)Bklij(s)∫∞0Kij(u)g(x(s−u))duxij(s)+Lij(s)]ds+∫tt0e−∫tsa0ij(u)du∑Ckl∈Nh3(i,j)Eklij(s)δij(xij(s−σij(s)))dwij(s). | (3.1) |
In (3.1), let t0→−∞, then one gets
xij(t)=∫t−∞e−∫tsa0ij(u)du[−acij(s)xij(s)+∑Ckl∈Nh1(i,j)Cklij(s)f(xkl(s−τkl(s)))xij(s)+∑Ckl∈Nh2(i,j)Bklij(s)∫∞0Kij(u)g(x(s−u))duxij(s)+Lij(s)]ds+∫t−∞e−∫tsa0ij(u)du×∑Ckl∈Nh3(i,j)Eklij(s)δij(xij(s−σij(s)))dwij(s),t≥t0,ij∈Λ. | (3.2) |
It is easy to see that if x(t) solves (3.2), then it also solves (2.1).
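The existence proof below realizes the solution of (3.2) as the fixed point of a contraction, so it can be computed by successive approximations. The sketch below illustrates this on a scalar, deterministic caricature of (3.2); the periodic grid, the truncation of the improper integral, and all parameter values are assumptions of this sketch.

```python
import math

# Picard iteration for a scalar, deterministic caricature of (3.2):
#   x(t) = int_{-inf}^t e^{-a(t-s)} [ c*f(x(s))*x(s) + L(s) ] ds,
# with a 2*pi-periodic input L, solved on one period with periodic indexing.
# The improper integral is truncated at depth 8/a (weight e^{-8} ~ 3e-4).

a, c = 2.0, 0.3
f = math.tanh
L = math.sin
N = 400
dt = 2 * math.pi / N
W = int(8.0 / a / dt)            # truncation depth in grid steps

def psi(x):
    """One application of the integral operator on the periodic grid."""
    out = []
    for i in range(N):
        acc = 0.0
        for k in range(W):
            s = i - k            # s <= t, wrapped periodically
            u = x[s % N]
            acc += math.exp(-a * k * dt) * (c * f(u) * u + L(s * dt)) * dt
        out.append(acc)
    return out

x = [0.0] * N
gaps = []
for _ in range(8):
    nxt = psi(x)
    gaps.append(max(abs(u - v) for u, v in zip(nxt, x)))
    x = nxt
print(gaps)   # successive differences shrink geometrically: a contraction
```

The geometric decay of the gaps is the numerical face of the Banach fixed-point argument used in Theorem 3.1; conditions (A3)/(A4) play the role of the contraction factor being strictly below one.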
Theorem 3.1. Assume that (A1)–(A4) hold. Then the system (2.1) has a unique Lp-bounded and Lp-uniformly continuous solution in X∗={ϕ∈X:‖ϕ−ϕ0‖X≤κ}, where κ is a constant satisfying κ≥‖ϕ0‖X.
Proof. Define an operator Ψ:X∗→X as follows:
(Ψϕ)(t)=((Ψ11ϕ)(t),(Ψ12ϕ)(t),…,(Ψmnϕ)(t))T, |
where (ϕ11,ϕ12,…,ϕmn)T∈X, t∈R and
(Ψijϕ)(t)=∫t−∞e−∫tsa0ij(u)du[−acij(s)ϕij(s)+∑Ckl∈Nh1(i,j)Cklij(s)f(ϕkl(s−τkl(s)))ϕij(s)+∑Ckl∈Nh2(i,j)Bklij(s)∫∞0Kij(u)g(ϕkl(s−u))duϕij(s)+Lij(s)]ds+∫t−∞e−∫tsa0ij(u)du∑Ckl∈Nh3(i,j)Eklij(s)δij(ϕij(s−σij(s)))dωij(s),ij∈Λ. | (3.3) |
First of all, let us show that ‖Ψϕ−ϕ0‖X≤κ for all ϕ∈X∗.
Notice that for any ϕ∈X∗, it holds that
‖ϕ‖X≤‖ϕ0‖X+‖ϕ−ϕ0‖X≤2κ. |
Then, we deduce that
E‖Ψϕ(t)−ϕ0(t)‖p0≤4p−1maxij∈Λ{E‖∫t−∞−e−∫tsa0ij(u)duacij(s)ϕij(s)‖p♭}+4p−1maxij∈Λ{E‖∫t−∞e−∫tsa0ij(u)du×∑Ckl∈Nh1(i,j)Cklij(s)f(ϕkl(s−τkl(s)))ϕij(s)ds‖p♭}+4p−1maxij∈Λ{E‖∫t−∞e−∫tsa0ij(u)du×∑Ckl∈Nh2(i,j)Bklij(s)∫∞0Kij(u)g(ϕkl(s−u))duϕij(s)ds‖p♭}+4p−1maxij∈Λ{E‖∫t−∞e−∫tsa0ij(u)du∑Ckl∈Nh3(i,j)Eklij(s)δij(ϕij(s−σij(s)))dωij(s)‖p♭}:=F1+F2+F3+F4. | (3.4) |
By the Hölder inequality, we have
F2≤4p−1maxij∈Λ{E‖[∫t−∞e−qp∫tsa0ij(u)duds]pq[∫t−∞e−pq∫tsa0ij(u)du×(∑Ckl∈Nh1(i,j)Cklij(s)f(ϕkl(s−τkl(s)))ϕij(s))pds]‖♭}≤4p−1maxij∈Λ{(pqa_0ij)pqE[∫t−∞e−pq∫tsa0ij(u)du(∑Ckl∈Nh1(i,j)(‖Cklij(s)‖♭)q)pq×∑ij∈Λ(2κLf)p‖ϕij(s)‖p♭ds]}≤4p−1maxij∈Λ{(pqa_0ij)pqqpa_0ij(∑Ckl∈Nh1(i,j)(Cklij+)q)pq(2κLf)p}‖ϕ‖pX. | (3.5) |
Similarly, one has
F1≤4p−1maxij∈Λ{(pqa_0ij)pqqpa_0ij(ˉacij)p}‖ϕ‖pX, | (3.6) |
F3≤4p−1maxij∈Λ{(pqa_0ij)pqqpa_0ij(∑Ckl∈Nh2(i,j)(Bklij+)q)pq(2κLg∫∞0|Kij(u)|du)p}‖ϕ‖pX. | (3.7) |
By the Burkholder–Davis–Gundy inequality and the Hölder inequality, when p>2, we infer that
F4≤4p−1Cpmaxij∈Λ{E[∫t−∞‖e−∫tsa0ij(u)du∑Ckl∈Nh3(i,j)Eklij(s)δij(ϕij(s−σij(s)))‖2♭ds]p2}≤4p−1Cpmaxij∈Λ{E[e−2∫tsa0ij(u)du‖∑Ckl∈Nh3(i,j)Eklijδij(ϕij(s−σij(s)))‖2♭ds]p2}≤4p−1Cpmaxij∈Λ{E[∫t−∞(e−2∫tsa0ij(u)du)pp−2×1pds]p−2p×p2×E[∫t−∞(e−2∫tsa0ij(u)du)1q×p2(‖∑Ckl∈Nh3(i,j)Eklij(s)δijϕij(s−σij(s))‖2♭)p2ds]}≤4p−1Cpmaxij∈Λ{(p−22a_0ij)p−22qpa_0ijE‖∑Ckl∈Nh3(i,j)Eklij(s)δij(ϕij(s−σij(s)))‖p♭}≤4p−1Cpmaxij∈Λ{(p−22a_0ij)p−22qpa_0ij(∑Ckl∈Nh3(i,j)(Eklij+)q)pq(Lδij)p}‖ϕ‖pX. | (3.8) |
When p=2, by the Itô isometry, it follows that
F4≤4maxij∈Λ{E[∫t−∞e−2∫tsa0ij(u)du‖∑Ckl∈Nh3(i,j)Eklij(s)δij(ϕij(s−σij(s)))‖2Ads]}≤4maxij∈Λ{12a_0ij∑Ckl∈Nh3(i,j)(Eklij+)2(Lδij)2}‖ϕ‖2X. | (3.9) |
Putting (3.5)–(3.9) into (3.4), we obtain that
‖Ψϕ−ϕ0‖pX≤4p−1maxij∈Λ{(pqa_0ij)pqqpa_0ij[(ˉacij)p+(∑Ckl∈Nh1(i,j)(Cklij+)q)pq(2κLf)p+(∑Ckl∈Nh2(i,j)(Bklij+)q)pq(2κLg∫∞0|Kij(u)|du)p]+Cp(p−22a_0ij)p−22qpa_0ij(∑Ckl∈Nh3(i,j)(Eklij+)q)pq(Lδij)p}‖ϕ‖pX≤κp,(p>2), | (3.10) |
and
‖Ψϕ−ϕ0‖2X≤4maxij∈Λ{1(a−ij)2[(ˉacij)2+∑Ckl∈Nh1(i,j)(Cklij+)2(2κLf)2+∑Ckl∈Nh2(i,j)(Bklij+)2(2κLg×∫∞0|Kij(u)|du)2]+12a_0ij∑Ckl∈Nh3(i,j)(Eklij+)2(Lδij)2}‖ϕ‖2X≤κ2,(p=2). | (3.11) |
It follows from (3.10), (3.11) and (A3) that ‖Ψϕ−ϕ0‖X≤κ.
Then, using the same method as that in the proof of Theorem 3.2 in [21], we can show that Ψϕ is Lp-uniformly continuous. Therefore, we have Ψ(X∗)⊂X∗.
Finally, we show that Ψ is a contraction mapping. Indeed, for any ψ,φ∈X∗, when p>2, we have
E‖(Φφ)(t)−(Φψ)(t)‖p0≤4p−1maxij∈Λ{E‖∫t−∞e−∫tsa0ij(u)du(−acij(s)φij(s)+acij(s)ψij(s))ds‖p♭}+4p−1maxij∈Λ{E‖∫t−∞e−∫tsa0ij(u)du∑Ckl∈Nh1(i,j)Cklij(s)[f(φkl(s−τkl(s)))φij(s)−f(ψkl(s−τkl(s)))ψij(s)]ds‖p♭}+4p−1maxij∈Λ{E‖∫t−∞e−∫tsa0ij(u)du∑Ckl∈Nh2(i,j)Bklij(s)×[∫∞0Kij(u)g(φkl(s−u))duφij(s)−∫∞0Kij(u)g(ψkl(s−u))duψij(u)]ds‖p♭}+4p−1maxij∈Λ{E‖∫t−∞e−∫tsa0ij(u)du∑Ckl∈Nh3(i,j)Eklij(s)[δij(φij(s−σij(s)))−δij(ψij(s−σij(s)))]dωij(s)‖p♭}≤4p−1maxij∈Λ{(pqa_0ij)pqqpa_0ij[(ˉacij)p+(∑Ckl∈Nh1(i,j)(Cklij+)q)pq(2κLf+Mf)p+(∑Ckl∈Nh2(i,j)(Bklij+)q)pq((2κLg+Mg)∫∞0|Kij(u)|du)p]+Cp(p−22a_0ij)p−22qpa_0ij×(∑Ckl∈Nh3(i,j)(Eklij+)q)pq(Lδij)p}‖φ−ψ‖pX. | (3.12) |
Similarly, for p=2, we can get
(3.13) |
From (3.12) and (3.13) it follows that
Hence, by virtue of , is a contraction mapping. So, has a unique fixed point in , i.e., (2.1) has a unique solution in .
Theorem 3.2. Assume that (A1)–(A5) hold. Then the system (2.1) has a unique p-th Stepanov almost periodic solution in the distribution sense in X∗={ϕ∈X:‖ϕ−ϕ0‖X≤κ}, where κ is a constant satisfying κ≥‖ϕ0‖X.
Proof. From Theorem 3.1, we know that (2.1) has a unique solution x in X∗. Now, let us show that x is Stepanov almost periodic in distribution. Since x∈X∗, it is Lp-uniformly continuous and satisfies ‖x−ϕ0‖X≤κ. So, for any ε>0, there exists δ>0 such that when |h|<δ, we have E‖x(t+h)−x(t)‖p0<ε. Hence, we derive that
(3.14) |
For the above, according to , we have, for ,
As , by (3.14), there holds
Based on (3.2), we can infer that
in which is a Brownian motion having the same distribution as .
Let us consider the process
(3.15) |
From (3.2) and (3.15), we deduce that
(3.16) |
Employing the Hölder inequality, we can obtain
By a change of variables and Fubini's theorem, we infer that
(3.17) |
where
Similarly, when , one can obtain
(3.18) |
where
(3.19) |
and when , we have
(3.20) |
where
(3.21) |
In the same way, we can get
(3.22) |
(3.23) |
(3.24) |
(3.25) |
(3.26) |
(3.27) |
(3.28) |
(3.29) |
Noting that
(3.30) |
We can gain
(3.31) |
(3.32) |
(3.33) |
(3.34) |
when , we have
(3.35) |
for , we get
(3.36) |
Substituting (3.17)–(3.36) into (3.16), we have the following two cases:
Case 1. When p>2, we have
where is the same as that in and
By , we know . Hence, we derive that
(3.37) |
Case 2. When p=2, we can obtain
where is defined in and
Similar to the previous case, by , we know and hence, we can get that
(3.38) |
Noting that
Hence, we have
(3.39) |
Combining (3.37)–(3.39), we can conclude that x is p-th Stepanov almost periodic in the distribution sense. The proof is complete.
Similar to the proof of Theorem 3.7 in [21], one can easily prove the following result.
Theorem 3.3. Suppose that (A1)–(A5) are fulfilled and let x be the Stepanov almost periodic solution in the distribution sense of system (2.1) with initial value ϕ. Then there exist positive constants λ and M such that every solution y with initial value ψ satisfies
i.e., the solution x is globally exponentially stable.
The purpose of this section is to demonstrate the effectiveness of the results obtained in this paper through a numerical example.
In neural network (2.1), choose , , and
and let . Then we get
Take , , then we have
And when , we have
Thus, all of the assumptions in Theorems 3.2 and 3.3 are fulfilled, so we can conclude that system (2.1) has a unique Stepanov almost periodic solution in the distribution sense, which is globally exponentially stable.
The results are also verified by the numerical simulations in Figures 1–4.
From these figures, we can observe that when the four components of each solution of this system start from different initial values, they eventually stabilize; that is, solutions meeting the above conditions do exist and are exponentially stable.
In this article, we established the existence and global exponential stability of Stepanov almost periodic solutions in the distribution sense for a class of stochastic Clifford-valued SICNNs with mixed delays. Even when network (2.1) degenerates into a real-valued NN, the results of this paper are new. Uncertainty, namely fuzziness, is also an issue that needs to be considered in modeling real systems; however, here we have considered only the disturbance of random factors and not fuzziness. Studying NNs subject to the effects of both random perturbations and fuzziness is a direction of our future work.
The authors declare that they have not used Artificial Intelligence (AI) tools in the creation of this article.
This work is supported by the National Natural Science Foundation of China under Grant No. 12261098.
The authors declare that they have no conflicts of interest.
[1] | W. Zhou, H. Li, Q. Tian, Recent advance in content-based image retrieval: a literature survey, preprint, arXiv: 1706.06064. |
[2] | J. H. Friedman, J. L. Bentley, R. A. Finkel, An algorithm for finding best matches in logarithmic expected time, ACM Trans. Math. Software, 3 (1977), 209–226. https://doi.org/10.1145/355744.355745 |
[3] | A. Gionis, P. Indyk, R. Motwani, Similarity search in high dimensions via hashing, in International Conference on Very Large Data Bases, 99 (1999), 518–529. Available from: https://www.cs.princeton.edu/courses/archive/spring13/cos598C/Gionis.pdf. |
[4] | Y. Gong, S. Lazebnik, A. Gordo, F. Perronnin, Iterative quantization: a procrustean approach to learning binary codes for large-scale image retrieval, IEEE Trans. Pattern Anal. Mach. Intell., 35 (2012), 2916–2929. https://doi.org/10.1109/TPAMI.2012.193 |
[5] | W. J. Li, S. Wang, W. C. Kang, Feature learning based deep supervised hashing with pairwise labels, preprint, arXiv: 1511.03855. |
[6] | Y. Weiss, A. Torralba, R. Fergus, Spectral hashing, in Advances in Neural Information Processing Systems, 21 (2008), 1753–1760. Available from: https://proceedings.neurips.cc/paper_files/paper/2008/file/d58072be2820e8682c0a27c0518e805e-Paper.pdf. |
[7] | W. Liu, J. Wang, S. Kumar, S. F. Chang, Hashing with graphs, in Proceedings of the 28 th International Conference on Machine Learning, (2011), 1–8. Available from: https://storage.googleapis.com/pub-tools-public-publication-data/pdf/37599.pdf. |
[8] | W. Liu, J. Wang, R. Ji, Y. G. Jiang, S. F. Chang, Supervised hashing with kernels, in 2012 IEEE Conference on Computer Vision and Pattern Recognition, (2012), 2074–2081. https://doi.org/10.1109/CVPR.2012.6247912 |
[9] | F. Shen, C. Shen, W. Liu, H. T. Shen, Supervised discrete hashing, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2015), 37–45. |
[10] | W. C. Kang, W. J. Li, Z. H. Zhou, Column sampling based discrete supervised hashing, in Proceedings of the AAAI Conference on Artificial Intelligence, 30 (2016), 1230–1236. https://doi.org/10.1609/aaai.v30i1.10176 |
[11] | Z. Cao, M. Long, J. Wang, P. S. Yu, Hashnet: deep learning to hash by continuation, in Proceedings of the IEEE International Conference on Computer Vision (ICCV), (2017), 5608–5617. |
[12] | H. Zhu, M. Long, J. Wang, Y. Cao, Deep hashing network for efficient similarity retrieval, in Proceedings of the AAAI Conference on Artificial Intelligence, 30 (2016), 2415–2421. https://doi.org/10.1609/aaai.v30i1.10235 |
[13] | H. Liu, R. Wang, S. Shan, X. Chen, Deep supervised hashing for fast image retrieval, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2016), 2064–2072. |
[14] | G. Irie, H. Arai, Y. Taniguchi, Alternating co-quantization for cross-modal hashing, in Proceedings of the IEEE International Conference on Computer Vision (ICCV), (2015), 1886–1894. |
[15] | M. Long, Y. Cao, J. Wang, P. S. Yu, Composite correlation quantization for efficient multimodal retrieval, in Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval, (2016), 579–588. https://doi.org/10.1145/2911451.2911493 |
[16] | Y. Cao, M. Long, J. Wang, S. Liu, Deep visual-semantic quantization for efficient image retrieval, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2017), 1328–1337. |
[17] | Y. Cao, M. Long, J. Wang, S. Liu, Collective deep quantization for efficient cross-modal retrieval, in Thirty-First AAAI Conference on Artificial Intelligence, 31 (2017), 3974–3980. https://doi.org/10.1609/aaai.v31i1.11218 |
[18] | E. Yang, C. Deng, C. Li, W. Liu, J. Li, D. Tao, Shared predictive cross-modal deep quantization, IEEE Trans. Neural Networks Learn. Syst., 29 (2018), 5292–5303. https://doi.org/10.1109/TNNLS.2018.2793863 |
[19] | Y. Fu, T. Xiang, Y. Jiang, X. Xue, L. Sigal, S. Gong, Recent advances in zero-shot recognition: toward data-efficient understanding of visual content, IEEE Signal Process Mag., 35 (2017), 112–125. https://doi.org/10.1109/MSP.2017.2763441 |
[20] | L. Zhang, T. Xiang, S. Gong, Learning a deep embedding model for zero-shot learning, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2017), 2021–2030. |
[21] | Y. Li, Z. Jia, J. Zhang, K. Huang, T. Tan, Deep semantic structural constraints for zero-shot learning, in Proceedings of the AAAI Conference on Artificial Intelligence, 32 (2018), 7049–7056. https://doi.org/10.1609/aaai.v32i1.12244 |
[22] | A. Farhadi, I. Endres, D. Hoiem, D. A. Forsyth, Describing objects by their attributes, in 2009 IEEE Conference on Computer Vision and Pattern Recognition, (2009), 1778–1785. https://doi.org/10.1109/CVPR.2009.5206772 |
[23] | T. Mikolov, K. Chen, G. Corrado, J. Dean, Efficient estimation of word representations in vector space, preprint, arXiv: 1301.3781. |
[24] | G. A. Miller, Wordnet: a lexical database for English, Commun. ACM, 38 (1995), 39–41. https://doi.org/10.1145/219717.219748 |
[25] | Y. Guo, G. Ding, J. Han, Y. Gao, Sitnet: discrete similarity transfer network for zero-shot hashing, in Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17), (2017), 1767–1773. Available from: https://www.ijcai.org/proceedings/2017/0245.pdf. |
[26] | Y. Yang, Y. Luo, W. Chen, F. Shen, J. Shao, H. T. Shen, Zero-shot hashing via transferring supervised knowledge, in Proceedings of the 24th ACM International Conference on Multimedia, (2016), 1286–1295. https://doi.org/10.1145/2964284.2964319 |
[27] | Y. Xu, Y. Yang, F. Shen, X. Xu, Y. Zhou, H. T. Shen, Attribute hashing for zero-shot image retrieval, in 2017 IEEE International Conference on Multimedia and Expo (ICME), (2017), 133–138. https://doi.org/10.1109/ICME.2017.8019425 |
[28] | H. Jiang, R. Wang, S. Shan, X. Chen, Learning class prototypes via structure alignment for zero-shot recognition, in Computer Vision – ECCV 2018, (2018), 121–138. https://doi.org/10.1007/978-3-030-01249-6_8 |
[29] | Q. Li, Z. Sun, R. He, T. Tan, Deep supervised discrete hashing, in Advances in Neural Information Processing Systems, 30 (2017), 2479–2488. Available from: https://proceedings.neurips.cc/paper_files/paper/2017/file/e94f63f579e05cb49c05c2d050ead9c0-Paper.pdf. |
[30] | Y. Cao, M. Long, J. Wang, Correlation hashing network for efficient cross-modal retrieval, preprint, arXiv: 1602.06697. |
[31] | T. Ge, K. He, Q. Ke, J. Sun, Optimized product quantization, IEEE Trans. Pattern Anal. Mach. Intell., 36 (2013), 744–755. https://doi.org/10.1109/TPAMI.2013.240 |
[32] | A. Krizhevsky, I. Sutskever, G. E. Hinton, Imagenet classification with deep convolutional neural networks, Commun. ACM, 60 (2017), 84–90. https://doi.org/10.1145/3065386 |
[33] | Y. Liu, H. Li, X. Wang, Rethinking feature discrimination and polymerization for large-scale recognition, preprint, arXiv: 1710.00870. |
[34] | A. Lazaridou, G. Dinu, M. Baroni, Hubness and pollution: delving into cross-space mapping for zero-shot learning, in Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, 1 (2015), 270–280. https://doi.org/10.3115/v1/P15-1027 |
[35] | A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, et al., Automatic differentiation in pytorch, 2017. Available from: https://openreview.net/forum?id=BJJsrmfCZ. |
[36] | J. Besag, On the statistical analysis of dirty pictures, J. R. Stat. Soc. Ser. B, 48 (1986), 259–302. https://doi.org/10.1111/j.2517-6161.1986.tb01412.x |
[37] | C. H. Lampert, H. Nickisch, S. Harmeling, Attribute-based classification for zero-shot visual object categorization, IEEE Trans. Pattern Anal. Mach. Intell., 36 (2013), 453–465. https://doi.org/10.1109/TPAMI.2013.140 |
[38] | J. Deng, W. Dong, R. Socher, L. Li, K. Li, F. Li, Imagenet: a large-scale hierarchical image database, in 2009 IEEE Conference on Computer Vision and Pattern Recognition, (2009), 248–255. https://doi.org/10.1109/CVPR.2009.5206848 |