In this paper, we consider the three-dimensional magneto-micropolar fluid equations with fractional dissipation
$$\left\{\begin{aligned}
&\partial_t u+\mu(-\Delta)^{\alpha}u-\chi\Delta u+u\cdot\nabla u-b\cdot\nabla b+\nabla p-2\chi\nabla\times v=0,\\
&\partial_t v+\eta(-\Delta)^{\beta}v-\kappa\nabla\nabla\cdot v+4\chi v+u\cdot\nabla v-2\chi\nabla\times u=0,\\
&\partial_t b+\lambda(-\Delta)^{\gamma}b+u\cdot\nabla b-b\cdot\nabla u=0,\\
&\nabla\cdot u=0,\qquad \nabla\cdot b=0,
\end{aligned}\right. \tag{1.1}$$
with an initial value
$$u|_{t=0}=u_0(x),\qquad v|_{t=0}=v_0(x),\qquad b|_{t=0}=b_0(x),\qquad x\in\mathbb{R}^3. \tag{1.2}$$
Here u = u(x,t), v = v(x,t), b = b(x,t) ∈ ℝ³, and p = p(x,t) ∈ ℝ denote the velocity, the micro-rotational velocity, the magnetic field, and the scalar pressure, respectively. The constants μ, χ, and 1/λ represent the kinematic viscosity, the vortex viscosity, and the magnetic Reynolds number, respectively, while η and κ are angular viscosities. The exponents α, β, and γ parametrize the fractional dissipation acting on the velocity, the micro-rotational velocity, and the magnetic field, respectively. The fractional Laplacian (−Δ)^α is defined through the Fourier transform as
$$\widehat{(-\Delta)^{\alpha}f}(\xi)=\widehat{\Lambda^{2\alpha}f}(\xi)=|\xi|^{2\alpha}\hat{f}(\xi).$$
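To make the Fourier-multiplier definition concrete, here is a minimal numerical sketch (our illustration, not part of the paper): on a 2π-periodic one-dimensional grid, (−Δ)^α acts as the multiplier |k|^{2α} on discrete Fourier modes, and for α = 1 it must reproduce −f″; we check this on f(x) = sin(3x), where (−Δ)sin(3x) = 9 sin(3x).

```python
# Illustration only: the fractional Laplacian as the Fourier multiplier |k|^(2*alpha)
# on a 2*pi-periodic grid, checked against the classical Laplacian for alpha = 1.
import cmath
import math

def dft(f):
    """Naive discrete Fourier transform F_k = sum_t f_t e^{-2*pi*i*t*k/N}."""
    n = len(f)
    return [sum(f[t] * cmath.exp(-2j * math.pi * t * k / n) for t in range(n))
            for k in range(n)]

def idft(F):
    """Inverse DFT with the 1/N normalization."""
    n = len(F)
    return [sum(F[k] * cmath.exp(2j * math.pi * t * k / n) for k in range(n)) / n
            for t in range(n)]

def frac_laplacian(f, alpha):
    """Apply (-Delta)^alpha via the multiplier |k|^(2*alpha) in Fourier space."""
    n = len(f)
    F = dft(f)
    # integer frequencies in standard FFT ordering: k for k < n/2, k - n otherwise
    freqs = [k if k < n / 2 else k - n for k in range(n)]
    G = [abs(freqs[k]) ** (2 * alpha) * F[k] for k in range(n)]
    return [z.real for z in idft(G)]

n = 64
x = [2 * math.pi * t / n for t in range(n)]
f = [math.sin(3 * xt) for xt in x]
g = frac_laplacian(f, alpha=1.0)                    # should equal 9*sin(3x)
err = max(abs(g[t] - 9 * f[t]) for t in range(n))
```

For non-integer α the same multiplier defines the fractional dissipation used in (1.1); only the exponent on |k| changes.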
Owing to their distinctive mathematical features, the incompressible magneto-micropolar fluid equations pose great analytic challenges while offering new opportunities. Fan and Zhong [1] established regularity criteria for weak solutions in terms of pointwise multipliers for 1 ≤ α = β = γ ≤ 5/4. Local and global well-posedness results were obtained in [2,3,4]. For α = β = γ = 1, we refer to [5,6,7] for the existence of strong and weak solutions. In the study of the magneto-micropolar fluid equations, regularity criteria for weak solutions and blow-up criteria for smooth solutions are important topics. The reader may consult the regularity criteria for weak solutions in Morrey-Campanato spaces [8], Lorentz spaces [9], Besov spaces [10], and Triebel-Lizorkin spaces [11], further regularity criteria in [12,13,14,15], and the blow-up criteria for smooth solutions in various function spaces in [16,17]. Serrin-type regularity criteria for weak solutions in terms of the velocity field and of the gradient of the velocity field were established by Yuan [13]. For global well-posedness we refer to [18,19,20]. On the other hand, the global regularity of weak solutions to (1.1) with partial viscosities is more delicate: we refer to [22,23,24,25] for the 2D case and to [26,27] for the 3D case.
If v = 0 and χ = 0, then (1.1) reduces to the MHD equations with fractional dissipation. The MHD equations govern the dynamics of the velocity and magnetic fields in electrically conducting fluids such as plasmas, liquid metals, and salt water. We only recall the regularity criteria relevant to our purpose. If α, β > 5/4, some regularity criteria in terms of the velocity u were established by Wu [28,29]. If 1 ≤ α = β ≤ 3/2, Zhou [30] obtained the Serrin-type criterion u ∈ L^p_T L^q_x with 2α/p + 3/q ≤ 2α − 1 and 3/(2α−1) < q ≤ ∞. Later, Yuan [14] extended the space L^q above to the Besov space B^s_{q,∞}. Recently, a regularity criterion involving u_3, b ∈ L^ω_T L^q_x was given in [31]. We also refer to [32,33] for well-posedness and to [34] for a blow-up criterion for smooth solutions.
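As a heuristic (this remark is ours and is not part of the paper), Zhou's condition can be read off from scaling:

```latex
% Scaling heuristic (our addition): for the fractional MHD equations with
% \alpha = \beta, solutions are preserved under the rescaling
\[
  u_\lambda(x,t)=\lambda^{2\alpha-1}u(\lambda x,\lambda^{2\alpha}t),\qquad
  b_\lambda(x,t)=\lambda^{2\alpha-1}b(\lambda x,\lambda^{2\alpha}t),\qquad
  p_\lambda(x,t)=\lambda^{2(2\alpha-1)}p(\lambda x,\lambda^{2\alpha}t).
\]
% A change of variables gives
\[
  \|u_\lambda\|_{L^p(0,T/\lambda^{2\alpha};L^q)}
  =\lambda^{\,2\alpha-1-\frac{2\alpha}{p}-\frac{3}{q}}\,\|u\|_{L^p(0,T;L^q)},
\]
% so the borderline case 2\alpha/p + 3/q = 2\alpha-1 of Zhou's criterion is
% exactly the scale-invariant one.
```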
Motivated by the Serrin-type regularity criteria for weak solutions of the Navier-Stokes equations [35,36] and the MHD equations [30,31], the main purpose of this paper is to investigate regularity criteria for weak solutions of the system (1.1)-(1.2) and to establish a Serrin-type regularity criterion involving only partial components. We state our main result as follows:
Theorem 1.1. Let 1 ≤ α = β = γ ≤ 3/2 and χ, κ ≥ 0. Assume that (u_0, v_0, b_0) ∈ H¹(ℝ³) and ∇⋅u_0 = ∇⋅b_0 = 0. Furthermore, if
$$u_3,\,v,\,b\in L^{\varrho}(0,T;L^{q}(\mathbb{R}^3)),$$
with
$$\frac{2\alpha}{\varrho}+\frac{3}{q}\le\frac{3}{4}(2\alpha-1)+\frac{3(1-\epsilon)}{4q},\qquad \frac{3+\epsilon}{2\alpha-1}<q\le\infty,\qquad 0<\epsilon\le\frac{1}{3}, \tag{1.3}$$
then the solution (u,v,b) to the systems (1.1) and (1.2) remains smooth on [0,T].
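As a quick arithmetic sanity check (our illustration, not part of the proof), exact rational arithmetic confirms that, for admissible sample values of α, q, ϵ, the borderline exponent ϱ determined by equality in (1.3) coincides with the exponent Θ_4 = 8αq/(3(2α−1)q + 3(1−ϵ) − 12) that appears later in the Gronwall step of the proof.

```python
# Illustration only: equality in (1.3) solved for rho reproduces Theta_4.
from fractions import Fraction as F

alpha, q, eps = F(5, 4), F(8), F(1, 4)      # sample admissible values

# admissibility: 1 <= alpha <= 3/2, (3+eps)/(2*alpha-1) < q, 0 < eps <= 1/3
assert F(1) <= alpha <= F(3, 2)
assert (3 + eps) / (2 * alpha - 1) < q
assert F(0) < eps <= F(1, 3)

# equality in (1.3): 2*alpha/rho = (3/4)*(2*alpha-1) + 3*(1-eps)/(4q) - 3/q
rhs = F(3, 4) * (2 * alpha - 1) + 3 * (1 - eps) / (4 * q) - 3 / q
rho = 2 * alpha / rhs

theta4 = 8 * alpha * q / (3 * (2 * alpha - 1) * q + 3 * (1 - eps) - 12)
assert rho == theta4
```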
Remark 1.2. Since the concrete values of the constants μ, η, and λ play no role in our proof, we shall assume them all equal to one throughout this paper. For convenience, we define the horizontal gradient ∇_h := (∂_1, ∂_2).
Remark 1.3. When v = 0 and χ = 0, the conclusion of Theorem 1.1 reduces to that of [31].
Remark 1.4. Compared with [31], the main difficulty in this paper comes from the nonlinear term u⋅∇v. To overcome it, guided by the energy functional (see (2.2)), we first integrate by parts and use ∇⋅u = 0 to transform the term into one controlled by horizontal derivatives, and then apply Hölder's inequality, the multiplicative Sobolev inequality, the Gagliardo-Nirenberg inequality, and Young's inequality.
In this section, our main purpose is to complete the proof of Theorem 1.1. To this end, we introduce the following lemma:
Lemma 2.1. ([37]) The multiplicative Sobolev inequality
$$\|\nabla u\|_{L^{3q}}\le C\,\|\partial_1\nabla u\|_{L^2}^{1/3}\,\|\partial_2\nabla u\|_{L^2}^{1/3}\,\|\partial_3\nabla u\|_{L^q}^{1/3},\qquad 1\le q<\infty, \tag{2.1}$$
holds.
In what follows, we prove Theorem 1.1.
Proof. Let
$$E(t):=\|\nabla_h u(t)\|_{L^2}^2+\|\nabla_h v(t)\|_{L^2}^2+\|\nabla_h b(t)\|_{L^2}^2+\int_0^t\Big(\|\nabla_h\Lambda^{\alpha}u(\tau)\|_{L^2}^2+\|\nabla_h\Lambda^{\alpha}v(\tau)\|_{L^2}^2+\|\nabla_h\Lambda^{\alpha}b(\tau)\|_{L^2}^2\Big)d\tau+\kappa\int_0^t\|\nabla_h\nabla\cdot v(\tau)\|_{L^2}^2\,d\tau. \tag{2.2}$$
The proof is divided into two cases: (3+ϵ)/(2α−1) < q < ∞ and q = ∞. We first consider the case (3+ϵ)/(2α−1) < q < ∞.
Taking the inner product of the first three equations of (1.1) with (u, v, b), adding them up, and using integration by parts, the divergence-free condition, and the Cauchy inequality, we obtain
$$\frac{1}{2}\frac{d}{dt}\big(\|u(t)\|_{L^2}^2+\|v(t)\|_{L^2}^2+\|b(t)\|_{L^2}^2\big)+\|\Lambda^{\alpha}u(t)\|_{L^2}^2+\|\Lambda^{\alpha}v(t)\|_{L^2}^2+\|\Lambda^{\alpha}b(t)\|_{L^2}^2+\kappa\|\nabla\cdot v(t)\|_{L^2}^2\le 0.$$
Integrating the above inequality with respect to t, we obtain
$$\|u(t)\|_{L^2}^2+\|v(t)\|_{L^2}^2+\|b(t)\|_{L^2}^2+2\int_0^t\Big(\|\Lambda^{\alpha}u(\tau)\|_{L^2}^2+\|\Lambda^{\alpha}v(\tau)\|_{L^2}^2+\|\Lambda^{\alpha}b(\tau)\|_{L^2}^2+\kappa\|\nabla\cdot v(\tau)\|_{L^2}^2\Big)d\tau\le\|u_0\|_{L^2}^2+\|v_0\|_{L^2}^2+\|b_0\|_{L^2}^2.$$
By multiplying the first three equations of (1.1) by Δ_h u, Δ_h v, and Δ_h b, respectively, adding them up, and using integration by parts and the divergence-free condition, we have
$$\frac{1}{2}\frac{d}{dt}\big(\|\nabla_h u(t)\|_{L^2}^2+\|\nabla_h v(t)\|_{L^2}^2+\|\nabla_h b(t)\|_{L^2}^2\big)+\|\nabla_h\Lambda^{\alpha}u(t)\|_{L^2}^2+\|\nabla_h\Lambda^{\alpha}v(t)\|_{L^2}^2+\|\nabla_h\Lambda^{\alpha}b(t)\|_{L^2}^2+\kappa\|\nabla_h\nabla\cdot v(t)\|_{L^2}^2+\chi\|\nabla_h\nabla u(t)\|_{L^2}^2+4\chi\|\nabla_h v\|_{L^2}^2=\sum_{i=1}^{6}I_i, \tag{2.3}$$
where
$$\begin{aligned}
I_1&=\int_{\mathbb{R}^3}(u\cdot\nabla u)\cdot\Delta_h u\,dx, & I_2&=-\int_{\mathbb{R}^3}(b\cdot\nabla b)\cdot\Delta_h u\,dx,\\
I_3&=\int_{\mathbb{R}^3}(u\cdot\nabla b)\cdot\Delta_h b\,dx, & I_4&=-\int_{\mathbb{R}^3}(b\cdot\nabla u)\cdot\Delta_h b\,dx,\\
I_5&=\int_{\mathbb{R}^3}(u\cdot\nabla v)\cdot\Delta_h v\,dx, & I_6&=-2\chi\int_{\mathbb{R}^3}\big[(\nabla\times v)\cdot\Delta_h u+(\nabla\times u)\cdot\Delta_h v\big]dx.
\end{aligned}$$
Thanks to integration by parts and Cauchy's inequality, we arrive at
$$I_6=4\chi\int_{\mathbb{R}^3}\nabla_h(\nabla\times u)\cdot\nabla_h v\,dx\le\chi\|\nabla_h(\nabla\times u)\|_{L^2}^2+4\chi\|\nabla_h v\|_{L^2}^2=\chi\|\nabla_h\nabla u\|_{L^2}^2+4\chi\|\nabla_h v\|_{L^2}^2. \tag{2.4}$$
For I_1, we split it into three parts I_{1i} (i = 1, 2, 3) as
$$I_1=\sum_{j,k=1}^{2}\int_{\mathbb{R}^3}u_j\,\partial_j u_k\,\Delta_h u_k\,dx+\sum_{j=1}^{3}\int_{\mathbb{R}^3}u_j\,\partial_j u_3\,\Delta_h u_3\,dx+\sum_{k=1}^{2}\int_{\mathbb{R}^3}u_3\,\partial_3 u_k\,\Delta_h u_k\,dx:=I_{11}+I_{12}+I_{13}. \tag{2.5}$$
The divergence-free condition and integration by parts entail that
I11=2∑i,j,k=1∫R3uj∂juk∂2iiukdx=−2∑i,j,k=1∫R3∂iuj∂juk∂iukdx+122∑i,j,k=1∫R3∂juj|∂iuk|2dx=−2∑i,j,k=1∫R3∂iuj∂juk∂iukdx−122∑i,k=1∫R3∂3u3|∂iuk|2dx=−∫R3∂1u1∂1u1∂1u1dx−∫R3∂1u1∂1u2∂1u2dx−∫R3∂1u2∂2u1∂1u1dx−∫R3∂1u2∂2u2∂1u2dx−∫R3∂2u1∂1u1∂2u1dx−∫R3∂2u1∂1u2∂2u2dx−∫R3∂2u2∂2u1∂2u1dx−∫R3∂2u2∂2u2∂2u2dx−122∑i,k=1∫R3∂3u3|∂iuk|2dx=−∫R3∂1u1∂1u1∂1u1dx−∫R3∂2u2∂2u2∂2u2dx+∫R3∂3u3∂2u1∂2u1dx+∫R3∂3u3∂1u2∂1u2dx+∫R3∂3u3∂2u1∂1u2dx−122∑i,k=1∫R3∂3u3|∂iuk|2dx=122∑j,k=1∫R3∂3u3∂kuj∂kujdx−∫R3∂3u3∂1u1∂2u2dx+∫R3∂3u3∂2u1∂1u2dx=−2∑j,k=1∫R3u3∂23kuj∂kujdx+∫R3u3(∂232u2∂1u1+∂231u1∂2u2)dx−∫R3u3(∂232u1∂1u2+∂231u2∂2u1)dx, | (2.6) |
and
$$I_{12}=-\sum_{j=1}^{3}\sum_{l=1}^{2}\int_{\mathbb{R}^3}\partial_l u_j\,\partial_j u_3\,\partial_l u_3\,dx=\sum_{j=1}^{3}\sum_{l=1}^{2}\int_{\mathbb{R}^3}\partial_l u_j\,u_3\,\partial_{jl}^2 u_3\,dx. \tag{2.7}$$
Therefore, we obtain
$$|I_1|\le C\int_{\mathbb{R}^3}|u_3|\,|\nabla u|\,|\nabla_h\nabla u|\,dx. \tag{2.8}$$
From Hölder's inequality, Lemma 2.1, the Gagliardo-Nirenberg inequality, and Young's inequality, it follows that
|I1|≤C∫R3|u3||∇u||∇h∇u|dx≤C‖u3‖Lq‖∇u‖Lθ1‖∇h∇u‖Lθ2≤C‖u3‖Lq‖∇h∇u‖23L2‖Δu‖13Lθ13‖∇h∇u‖Lθ2≤C‖u3‖Lq‖∇hu‖2s13L2‖∇hΛαu‖2(1−s1)3L2‖∇u‖s23L2‖Λα+1u‖1−s23L2‖∇hu‖s3L2‖∇hΛαu‖1−s3L2≤C‖u3‖Lq‖∇u‖2s13L2‖∇hΛαu‖2(1−s1)3L2‖∇u‖s23L2‖Λα+1u‖1−s23L2‖∇u‖s3L2‖∇hΛαu‖1−s3L2≤C‖u3‖Lq‖∇u‖2s13+s23+s3L2‖Λα+1u‖1−s23L2‖∇hΛαu‖2(1−s1)3+1−s3L2≤C[‖u3‖Lq‖∇u‖2s13+s23+s3L2‖Λα+1u‖1−s23L2]m′+16‖∇hΛαu‖(2(1−s1)3+1−s3)mL2, | (2.9) |
where the constants 1<θ1,θ2,m,m′<∞ and 0≤s1,s2,s3≤1 satisfy
$$\left\{\begin{aligned}
&\frac{1}{\theta_1}+\frac{1}{\theta_2}+\frac{1}{q}=1,\\
&2-\frac{3}{2}=\Big(1-\frac{3}{2}\Big)s_1+\Big(1+\alpha-\frac{3}{2}\Big)(1-s_1),\\
&2-\frac{9}{\theta_1}=\Big(1-\frac{3}{2}\Big)s_2+\Big(1+\alpha-\frac{3}{2}\Big)(1-s_2),\\
&2-\frac{3}{\theta_2}=\Big(1-\frac{3}{2}\Big)s_3+\Big(1+\alpha-\frac{3}{2}\Big)(1-s_3),\\
&\frac{1}{m}+\frac{1}{m'}=1,\qquad \Big(\frac{2(1-s_1)}{3}+1-s_3\Big)m=2.
\end{aligned}\right. \tag{2.10}$$
Noting that 1 ≤ α ≤ 3/2 and (3+ϵ)/(2α−1) < q ≤ ∞, one solution to (2.10) can be written as
$$\left\{\begin{aligned}
&\theta_1=\frac{18q}{5q-18\epsilon},\qquad \theta_2=\frac{18q}{13q-18(1-\epsilon)},\qquad s_1=1-\frac{1}{\alpha},\qquad s_2=1-\frac{9\epsilon}{\alpha q},\\
&s_3=1-\frac{1}{3\alpha}-\frac{3(1-\epsilon)}{\alpha q},\qquad m=\frac{2\alpha q}{q+3(1-\epsilon)},\qquad m'=\frac{2\alpha q}{(2\alpha-1)q-3(1-\epsilon)}.
\end{aligned}\right. \tag{2.11}$$
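The algebra behind (2.11) can be verified mechanically (our illustration, not part of the proof): with exact rational arithmetic, the stated values solve all six relations of (2.10) and satisfy the required ranges for sample admissible α, q, ϵ.

```python
# Illustration only: the values in (2.11) solve the system (2.10).
from fractions import Fraction as F

alpha, q, eps = F(5, 4), F(8), F(1, 4)      # sample admissible values

theta1 = 18 * q / (5 * q - 18 * eps)
theta2 = 18 * q / (13 * q - 18 * (1 - eps))
s1 = 1 - 1 / alpha
s2 = 1 - 9 * eps / (alpha * q)
s3 = 1 - 1 / (3 * alpha) - 3 * (1 - eps) / (alpha * q)
m = 2 * alpha * q / (q + 3 * (1 - eps))
mp = 2 * alpha * q / ((2 * alpha - 1) * q - 3 * (1 - eps))

# right-hand-side pattern shared by the three interpolation identities in (2.10)
gn = lambda s: (1 - F(3, 2)) * s + (1 + alpha - F(3, 2)) * (1 - s)

assert 1 / theta1 + 1 / theta2 + 1 / q == 1
assert 2 - F(3, 2) == gn(s1)
assert 2 - 9 / theta1 == gn(s2)
assert 2 - 3 / theta2 == gn(s3)
assert 1 / m + 1 / mp == 1
assert (2 * (1 - s1) / 3 + (1 - s3)) * m == 2
assert all(0 <= s <= 1 for s in (s1, s2, s3)) and min(theta1, theta2, m, mp) > 1
```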
To bound I3, we decompose it into three pieces as
$$I_3=\sum_{j,k=1}^{2}\int_{\mathbb{R}^3}u_j\,\partial_j b_k\,\Delta_h b_k\,dx+\sum_{j=1}^{2}\int_{\mathbb{R}^3}u_j\,\partial_j b_3\,\Delta_h b_3\,dx+\sum_{k=1}^{3}\int_{\mathbb{R}^3}u_3\,\partial_3 b_k\,\Delta_h b_k\,dx:=I_{31}+I_{32}+I_{33}. \tag{2.12}$$
By integration by parts (see [31]), we have
$$I_{31}=\sum_{j,k,l=1}^{2}\int_{\mathbb{R}^3}\big[\partial_{ll}^2 u_j\,\partial_j b_k\,b_k+\partial_l u_j\,\partial_{lj}^2 b_k\,b_k\big]dx-\frac{1}{2}\sum_{j,k,l=1}^{2}\int_{\mathbb{R}^3}\big[\partial_{lj}^2 u_j\,\partial_l b_k\,b_k+\partial_j u_j\,\partial_{ll}^2 b_k\,b_k\big]dx. \tag{2.13}$$
Similarly, we have
$$I_{32}=\sum_{j,l=1}^{2}\int_{\mathbb{R}^3}\big[\partial_{ll}^2 u_j\,\partial_j b_3\,b_3+\partial_l u_j\,\partial_{lj}^2 b_3\,b_3\big]dx-\frac{1}{2}\sum_{j,l=1}^{2}\int_{\mathbb{R}^3}\big[\partial_{lj}^2 u_j\,\partial_l b_3\,b_3+\partial_j u_j\,\partial_{ll}^2 b_3\,b_3\big]dx, \tag{2.14}$$
and
$$I_{33}=\sum_{k=1}^{3}\sum_{l=1}^{2}\int_{\mathbb{R}^3}\big[\partial_{3l}^2 u_3\,\partial_l b_k\,b_k+\partial_l u_3\,\partial_{3l}^2 b_k\,b_k\big]dx+\frac{1}{2}\sum_{k=1}^{3}\sum_{j,l=1}^{2}\int_{\mathbb{R}^3}\big[\partial_{lj}^2 u_j\,\partial_l b_k\,b_k+\partial_j u_j\,\partial_{ll}^2 b_k\,b_k\big]dx. \tag{2.15}$$
Collecting (2.13)–(2.15), it is easy to derive that
$$|I_3|\le C\int_{\mathbb{R}^3}|b|\big(|\nabla u|+|\nabla b|\big)\big(|\nabla_h\nabla u|+|\nabla_h\nabla b|\big)dx. \tag{2.16}$$
Furthermore, we have
$$|I_2+I_3+I_4|\le C\int_{\mathbb{R}^3}|b|\big(|\nabla u|+|\nabla b|\big)\big(|\nabla_h\nabla u|+|\nabla_h\nabla b|\big)dx. \tag{2.17}$$
Similarly to (2.9), it follows from Hölder's inequality, Lemma 2.1, the Gagliardo-Nirenberg inequality, and Young's inequality that
|I2+I3+I4|≤C∫R3|b|(|∇u|+|∇b|)(|∇h∇u|+|∇h∇b|)dx≤C‖b‖Lq‖|∇u|+|∇b|‖Lθ1‖|∇h∇u|+|∇h∇b|‖Lθ2≤C‖b‖Lq(‖∇h∇u‖23L2‖Δu‖13Lθ13+‖∇h∇b‖23L2‖Δb‖13Lθ13)⋅(‖∇h∇u‖Lθ2+‖∇h∇b‖Lθ2)≤C‖b‖Lq(‖∇u‖2s13L2‖∇hΛαu‖2(1−s1)3L2‖∇u‖s23L2‖Λα+1u‖1−s23L2+‖∇b‖2s13L2‖∇hΛαb‖2(1−s1)3L2‖∇b‖s23L2‖Λα+1b‖1−s23L2)⋅(‖∇u‖s3L2‖∇hΛαu‖1−s3L2+‖∇b‖s3L2‖∇hΛαb‖1−s3L2)≤C‖b‖Lq(‖∇u‖2s13L2+‖∇b‖2s13L2)(‖∇hΛαu‖2(1−s1)3L2+‖∇hΛαb‖2(1−s1)3L2)⋅(‖∇u‖s23L2+‖∇b‖s23L2)(‖Λα+1u‖1−s23L2+‖Λα+1b‖1−s23L2)⋅(‖∇u‖s3L2+‖∇b‖s3L2)(‖∇hΛαu‖1−s3L2+‖∇hΛαb‖1−s3L2)≤C‖b‖Lq(‖∇u‖L2+‖∇b‖L2)2s13+s23+s3(‖Λα+1u‖L2+‖Λα+1b‖L2)1−s23⋅(‖∇hΛαu‖L2+‖∇hΛαb‖L2)2(1−s1)3+1−s3≤C[‖b‖Lq(‖∇u‖L2+‖∇b‖L2)2s13+s23+s3(‖Λα+1u‖L2+‖Λα+1b‖L2)1−s23]m′+16(‖∇hΛαu‖L2+‖∇hΛαb‖L2)(2(1−s1)3+1−s3)m, | (2.18) |
where the constants 1<θ1,θ2,m,m′<∞ and 0≤s1,s2,s3≤1 satisfy (2.10).
Similar to I3, we bound I5 as
$$|I_5|\le C\int_{\mathbb{R}^3}|v|\big(|\nabla u|+|\nabla v|\big)\big(|\nabla_h\nabla u|+|\nabla_h\nabla v|\big)dx. \tag{2.19}$$
Using the same steps as in (2.18), we obtain
$$|I_5|\le C\int_{\mathbb{R}^3}|v|\big(|\nabla u|+|\nabla v|\big)\big(|\nabla_h\nabla u|+|\nabla_h\nabla v|\big)dx\le C\Big[\|v\|_{L^q}\big(\|\nabla u\|_{L^2}+\|\nabla v\|_{L^2}\big)^{\frac{2s_1}{3}+\frac{s_2}{3}+s_3}\big(\|\Lambda^{\alpha+1}u\|_{L^2}+\|\Lambda^{\alpha+1}v\|_{L^2}\big)^{\frac{1-s_2}{3}}\Big]^{m'}+\frac{1}{6}\big(\|\nabla_h\Lambda^{\alpha}u\|_{L^2}+\|\nabla_h\Lambda^{\alpha}v\|_{L^2}\big)^{\big(\frac{2(1-s_1)}{3}+1-s_3\big)m},$$
where the constants 1<θ1,θ2,m,m′<∞ and 0≤s1,s2,s3≤1 satisfy (2.10).
Combining (2.3), (2.4), (2.9), (2.18), and the estimate of I_5 above, we arrive at
ddt(‖∇hu(t)‖2L2+‖∇hv(t)‖2L2+‖∇hb(t)‖2L2)+‖∇hΛαu(t)‖2L2+‖∇hΛαv(t)‖2L2+‖∇hΛαb(t)‖2L2+κ‖∇h∇⋅v(t)‖2L2≤C‖u3‖2αq(2α−1)q−3(1−ϵ)Lq‖∇u‖2((2α−1)q−3)(2α−1)q−3(1−ϵ)L2‖Λα+1u‖6ϵ(2α−1)q−3(1−ϵ)L2+‖b‖2αq(2α−1)q−3(1−ϵ)Lq(‖∇u‖L2+‖∇b‖L2)2((2α−1)q−3)(2α−1)q−3(1−ϵ)(‖Λα+1u‖L2+‖Λα+1b‖L2)6ϵ(2α−1)q−3(1−ϵ)+‖v‖2αq(2α−1)q−3(1−ϵ)Lq(‖∇u‖L2+‖∇v‖L2)2((2α−1)q−3)(2α−1)q−3(1−ϵ)(‖Λα+1u‖L2+‖Λα+1v‖L2)6ϵ(2α−1)q−3(1−ϵ)≤C(‖u3‖Lq+‖b‖Lq+‖v‖Lq)2αq(2α−1)q−3(1−ϵ)(‖∇u‖L2+‖∇b‖L2+‖∇v‖L2)2((2α−1)q−3)(2α−1)q−3(1−ϵ)(‖Λα+1u‖L2+‖Λα+1b‖L2+‖Λα+1v‖L2)6ϵ(2α−1)q−3(1−ϵ). | (2.20) |
Set
$$\Theta_1=\frac{2\alpha q}{(2\alpha-1)q-3(1-\epsilon)},\qquad \Theta_2=\frac{2\big((2\alpha-1)q-3\big)}{(2\alpha-1)q-3(1-\epsilon)},\qquad \Theta_3=\frac{6\epsilon}{(2\alpha-1)q-3(1-\epsilon)}. \tag{2.21}$$
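The exponents Θ_1, Θ_2, Θ_3 can be checked against (2.11) by exact rational arithmetic (our illustration, not part of the proof): Θ_1 = m′, Θ_2 = (2s_1/3 + s_2/3 + s_3)m′, Θ_3 = ((1−s_2)/3)m′, and Θ_2/2 + Θ_3/2 = 1, which is the Hölder pairing used after (2.22).

```python
# Illustration only: consistency of (2.21) with the exponents from (2.11).
from fractions import Fraction as F

alpha, q, eps = F(5, 4), F(8), F(1, 4)      # sample admissible values

s1 = 1 - 1 / alpha
s2 = 1 - 9 * eps / (alpha * q)
s3 = 1 - 1 / (3 * alpha) - 3 * (1 - eps) / (alpha * q)
mp = 2 * alpha * q / ((2 * alpha - 1) * q - 3 * (1 - eps))

D = (2 * alpha - 1) * q - 3 * (1 - eps)     # common denominator in (2.21)
theta1 = 2 * alpha * q / D
theta2 = 2 * ((2 * alpha - 1) * q - 3) / D
theta3 = 6 * eps / D

assert theta1 == mp
assert theta2 == (2 * s1 / 3 + s2 / 3 + s3) * mp
assert theta3 == ((1 - s2) / 3) * mp
assert theta2 / 2 + theta3 / 2 == 1
```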
Integrating (2.20) with respect to t, we obtain
E(t)≤CJ0+C∫t0(‖u3‖Lq+‖b‖Lq+‖v‖Lq)Θ1(‖∇u‖L2+‖∇b‖L2+‖∇v‖L2)Θ2(‖Λα+1u‖L2+‖Λα+1b‖L2+‖Λα+1v‖L2)Θ3dτ, | (2.22) |
where $J_0=\|\nabla u(0)\|_{L^2}^2+\|\nabla v(0)\|_{L^2}^2+\|\nabla b(0)\|_{L^2}^2$.
By taking the inner product of the first three equations of (1.1) with (−Δu, −Δv, −Δb) and using integration by parts and the divergence-free condition, we have
$$\frac{1}{2}\frac{d}{dt}\big(\|\nabla u(t)\|_{L^2}^2+\|\nabla v(t)\|_{L^2}^2+\|\nabla b(t)\|_{L^2}^2\big)+\|\Lambda^{\alpha+1}u(t)\|_{L^2}^2+\|\Lambda^{\alpha+1}v(t)\|_{L^2}^2+\|\Lambda^{\alpha+1}b(t)\|_{L^2}^2+\kappa\|\nabla\nabla\cdot v(t)\|_{L^2}^2+\chi\|\nabla\nabla u(t)\|_{L^2}^2+4\chi\|\nabla v(t)\|_{L^2}^2=\sum_{i=1}^{6}J_i, \tag{2.23}$$
where
$$\begin{aligned}
J_1&=\int_{\mathbb{R}^3}(u\cdot\nabla u)\cdot\Delta u\,dx, & J_2&=-\int_{\mathbb{R}^3}(b\cdot\nabla b)\cdot\Delta u\,dx,\\
J_3&=\int_{\mathbb{R}^3}(u\cdot\nabla b)\cdot\Delta b\,dx, & J_4&=-\int_{\mathbb{R}^3}(b\cdot\nabla u)\cdot\Delta b\,dx,\\
J_5&=\int_{\mathbb{R}^3}(u\cdot\nabla v)\cdot\Delta v\,dx, & J_6&=-2\chi\int_{\mathbb{R}^3}\big[(\nabla\times v)\cdot\Delta u+(\nabla\times u)\cdot\Delta v\big]dx.
\end{aligned}$$
By integration by parts and Cauchy's inequality, we arrive at
$$J_6=4\chi\int_{\mathbb{R}^3}\nabla(\nabla\times u)\cdot\nabla v\,dx\le\chi\|\nabla(\nabla\times u)\|_{L^2}^2+4\chi\|\nabla v\|_{L^2}^2=\chi\|\nabla\nabla u\|_{L^2}^2+4\chi\|\nabla v\|_{L^2}^2. \tag{2.24}$$
For J_1, we split it into three parts J_{1i} (i = 1, 2, 3) as
$$J_1=\int_{\mathbb{R}^3}u_3\,\partial_3 u\cdot\Delta_h u\,dx+\sum_{j=1}^{2}\int_{\mathbb{R}^3}u_j\,\partial_j u\cdot\Delta u\,dx+\int_{\mathbb{R}^3}u_3\,\partial_3 u\cdot\partial_{33}^2 u\,dx:=J_{11}+J_{12}+J_{13}. \tag{2.25}$$
Integrating by parts and using the divergence-free condition yields
$$J_{11}=\frac{1}{2}\sum_{k=1}^{3}\sum_{l=1}^{2}\int_{\mathbb{R}^3}\partial_3 u_3\,\partial_l u_k\,\partial_l u_k\,dx-\sum_{k=1}^{3}\sum_{l=1}^{2}\int_{\mathbb{R}^3}\partial_l u_3\,\partial_3 u_k\,\partial_l u_k\,dx, \tag{2.26}$$
$$J_{12}=\frac{1}{2}\sum_{j=1}^{2}\sum_{k,l=1}^{3}\int_{\mathbb{R}^3}\partial_j u_j\,\partial_l u_k\,\partial_l u_k\,dx-\sum_{j=1}^{2}\sum_{k,l=1}^{3}\int_{\mathbb{R}^3}\partial_l u_j\,\partial_j u_k\,\partial_l u_k\,dx, \tag{2.27}$$
and
$$J_{13}=\frac{1}{2}\sum_{k=1}^{3}\int_{\mathbb{R}^3}(\partial_1 u_1+\partial_2 u_2)\,\partial_3 u_k\,\partial_3 u_k\,dx. \tag{2.28}$$
Therefore, we have
$$|J_1|\le C\int_{\mathbb{R}^3}|\nabla_h u|\,|\nabla u|^2\,dx. \tag{2.29}$$
From Hölder's inequality and Lemma 2.1, it follows that
$$|J_1|\le C\|\nabla_h u\|_{L^2}\|\nabla u\|_{L^4}^2\le C\|\nabla_h u\|_{L^2}\|\nabla u\|_{L^2}^{2-\frac{3}{2\alpha}}\|\Lambda^{\alpha}u\|_{L^6}^{\frac{3}{2\alpha}}\le C\|\nabla_h u\|_{L^2}\|\nabla u\|_{L^2}^{2-\frac{3}{2\alpha}}\|\nabla_h\Lambda^{\alpha}u\|_{L^2}^{\frac{1}{\alpha}}\|\Lambda^{\alpha+1}u\|_{L^2}^{\frac{1}{2\alpha}}. \tag{2.30}$$
By integration by parts and the divergence-free condition, we have
$$J_3=-\sum_{j,k,l=1}^{3}\int_{\mathbb{R}^3}\partial_l(u_j\partial_j b_k)\,\partial_l b_k\,dx=-\sum_{j,k,l=1}^{3}\int_{\mathbb{R}^3}\big(\partial_l u_j\,\partial_j b_k\,\partial_l b_k+u_j\,\partial_{lj}^2 b_k\,\partial_l b_k\big)dx=\sum_{j,k,l=1}^{3}\int_{\mathbb{R}^3}b_k\,\partial_l\big(\partial_l u_j\,\partial_j b_k\big)dx=\sum_{j,k,l=1}^{3}\int_{\mathbb{R}^3}\big(b_k\,\partial_{ll}^2 u_j\,\partial_j b_k+b_k\,\partial_l u_j\,\partial_{jl}^2 b_k\big)dx. \tag{2.31}$$
Then we arrive at
$$|J_3|\le C\int_{\mathbb{R}^3}|b|\big(|\nabla u|+|\nabla b|\big)\big(|\Delta u|+|\Delta b|\big)dx. \tag{2.32}$$
Furthermore, we have
$$|J_2+J_3+J_4|\le C\int_{\mathbb{R}^3}|b|\big(|\nabla u|+|\nabla b|\big)\big(|\Delta u|+|\Delta b|\big)dx. \tag{2.33}$$
It follows from the same procedure as in (2.18) that
|J2+J3+J4|≤C∫R3|b|(|∇u|+|∇b|)(|Δu|+|Δb|)dx≤C‖b‖Lq‖|∇u|+|∇b|‖Lθ1‖|Δu|+|Δb|‖Lθ2≤C‖b‖Lq(‖Δu‖23L2‖Δu‖13Lθ13+‖Δb‖23L2‖Δb‖13Lθ13)(‖Δu‖Lθ2+‖Δb‖Lθ2)≤C‖b‖Lq(‖∇u‖2s13L2‖Λα+1u‖2(1−s1)3L2‖∇u‖s23L2‖Λα+1u‖1−s23L2+‖∇b‖2s13L2‖Λα+1b‖2(1−s1)3L2‖∇b‖s23L2‖Λα+1b‖1−s23L2)×(‖∇u‖s3L2‖Λα+1u‖1−s3L2+‖∇b‖s3L2‖Λα+1b‖1−s3L2)≤C‖b‖Lq(‖∇u‖L2+‖∇b‖L2)2s13+s23+s3(‖Λα+1u‖L2+‖Λα+1b‖L2)2(1−s1)3+1−s23+1−s3≤C‖b‖2αq(2α−1)q−3Lq(‖∇u‖2L2+‖∇b‖2L2)+18(‖Λα+1u‖2L2+‖Λα+1b‖2L2), | (2.34) |
where the constants 1<θ1,θ2,m,m′<∞ and 0≤s1,s2,s3≤1 satisfy (2.10).
Similar to J3, we bound J5 as
$$|J_5|\le C\int_{\mathbb{R}^3}|v|\big(|\nabla u|+|\nabla v|\big)\big(|\Delta u|+|\Delta v|\big)dx. \tag{2.35}$$
The same procedure as for (2.34) yields
$$|J_5|\le C\int_{\mathbb{R}^3}|v|\big(|\nabla u|+|\nabla v|\big)\big(|\Delta u|+|\Delta v|\big)dx\le C\|v\|_{L^q}^{\frac{2\alpha q}{(2\alpha-1)q-3}}\big(\|\nabla u\|_{L^2}^2+\|\nabla v\|_{L^2}^2\big)+\frac{1}{8}\big(\|\Lambda^{\alpha+1}u\|_{L^2}^2+\|\Lambda^{\alpha+1}v\|_{L^2}^2\big).$$
Combining (2.23), (2.24), (2.30), (2.34), and the estimate of J_5 above, we have
12ddt(‖∇u(t)‖2L2+‖∇v(t)‖2L2+‖∇b(t)‖2L2)+34(‖Λα+1u(t)‖2L2+‖Λα+1v(t)‖2L2)+34‖Λα+1b(t)‖2L2+κ‖∇∇⋅v(t)‖2L2≤C(‖b‖2αq(2α−1)q−3Lq+‖v‖2αq(2α−1)q−3Lq)(‖∇u‖2L2+‖∇b‖2L2+‖∇v‖2L2)+C‖∇hu‖L2‖∇u‖2−32αL2‖∇hΛαu‖1αL2‖Λα+1u‖12αL2. | (2.36) |
Integrating (2.36) over the interval (0,t) and using Hölder's inequality, we deduce that
12(‖∇u(t)‖2L2+‖∇v(t)‖2L2+‖∇b(t)‖2L2)+34∫t0(‖Λα+1u(τ)‖2L2+‖Λα+1v(τ)‖2L2+‖Λα+1b(τ)‖2L2)dτ+∫t0κ‖∇∇⋅v(τ)‖2L2dτ≤C+C∫t0(‖b‖2αq(2α−1)q−3Lq+‖v‖2αq(2α−1)q−3Lq)(‖∇u‖2L2+‖∇b‖2L2+‖∇v‖2L2)dτ+C∫t0‖∇hu‖L2‖∇u‖2−32αL2‖∇hΛαu‖1αL2‖Λα+1u‖12αL2dτ≤C+C∫t0(‖b‖2αq(2α−1)q−3Lq+‖v‖2αq(2α−1)q−3Lq)(‖∇u‖2L2+‖∇b‖2L2+‖∇v‖2L2)dτ+Csup0≤τ≤t‖∇hu‖L2∫t0‖∇u‖2−32αL2‖∇hΛαu‖1αL2‖Λα+1u‖12αL2dτ. | (2.37) |
From Young's inequality, it follows that
Csup0≤τ≤t‖∇hu‖L2∫t0‖∇u‖2−32αL2‖∇hΛαu‖1αL2‖Λα+1u‖12αL2dτ≤Csup0≤τ≤t‖∇hu‖L2[∫t0‖∇u‖2L2dτ]1−34α[∫t0‖∇hΛαu‖2L2dτ]12α[∫t0‖Λα+1u‖2L2dτ]14α≤Csup0≤τ≤t‖∇hu‖L2[∫t0‖u‖2α1+αL2‖Λα+1u‖21+αL2dτ]1−34α[∫t0‖∇hΛαu‖2L2dτ]12α[∫t0‖Λα+1u‖2L2dτ]14α≤Csup0≤τ≤t‖∇hu‖L2[∫t0‖∇hΛαu‖2L2dτ]12α[∫t0‖Λα+1u‖2L2dτ]14α+4α−34α(1+α)≤Csup0≤τ≤t‖∇hu‖L2[(∫t0‖∇hΛαu‖2L2dτ)12+1][(∫t0‖Λα+1u‖2L2dτ)14+1]≤CE(t)[∫t0‖Λα+1u‖2L2dτ]14+Csup0≤τ≤t‖∇hu‖L2[∫t0‖Λα+1u‖2L2dτ]14+CE(t)+Csup0≤τ≤t‖∇hu‖L2≤CE(t)[∫t0‖Λα+1u‖2L2dτ]14+C(sup0≤τ≤t‖∇hu‖2L2+1)[∫t0‖Λα+1u‖2L2dτ]14+CE(t)+Csup0≤τ≤t‖∇hu‖2L2+C≤CE(t)[∫t0‖Λα+1u‖2L2dτ]14+C[∫t0‖Λα+1u‖2L2dτ]14+CE(t)+C. | (2.38) |
Then, we have
12(‖∇u(t)‖2L2+‖∇v(t)‖2L2+‖∇b(t)‖2L2)+34∫t0(‖Λα+1u(τ)‖2L2+‖Λα+1v(τ)‖2L2+‖Λα+1b(τ)‖2L2)dτ+∫t0κ‖∇∇⋅v(τ)‖2L2dτ≤C+C∫t0(‖b‖2αq(2α−1)q−3Lq+‖v‖2αq(2α−1)q−3Lq)(‖∇u‖2L2+‖∇b‖2L2+‖∇v‖2L2)dτ+CE(t)[∫t0‖Λα+1u‖2L2dτ]14+C[∫t0‖Λα+1u‖2L2dτ]14+CE(t)+C. | (2.39) |
By using Hölder's inequality, Young's inequality, and (2.22), we deduce that
CE(t)≤C+C∫t0(‖u3‖Lq+‖b‖Lq+‖v‖Lq)Θ1(‖∇u‖L2+‖∇b‖L2+‖∇v‖L2)Θ2(‖Λα+1u‖L2+‖Λα+1b‖L2+‖Λα+1v‖L2)Θ3dτ≤C+C[∫t0(‖u3‖Lq+‖b‖Lq+‖v‖Lq)2αq(2α−1)q−3(‖∇u‖L2+‖∇b‖L2+‖∇v‖L2)2dτ]Θ2[∫t0(‖Λα+1u‖L2+‖Λα+1b‖L2+‖Λα+1v‖L2)2dτ]12Θ3≤C+C∫t0(‖u3‖Lq+‖b‖Lq+‖v‖Lq)2αq(2α−1)q−3(‖∇u‖L2+‖∇b‖L2+‖∇v‖L2)2dτ+116∫t0(‖Λα+1u‖L2+‖Λα+1b‖L2+‖Λα+1v‖L2)2dτ. | (2.40) |
Similarly, it follows from (2.22), Hölder's inequality, and Young's inequality that
CE(t)[∫t0‖Λα+1u‖2L2dτ]14≤C[∫t0‖Λα+1u‖2L2dτ]14+C[∫t0‖Λα+1u‖2L2dτ]14∫t0(‖u3‖Lq+‖b‖Lq+‖v‖Lq)Θ1(‖∇u‖L2+‖∇b‖L2+‖∇v‖L2)Θ22(‖Λα+1u‖L2+‖Λα+1b‖L2+‖Λα+1v‖L2)Θ3dτ≤C[∫t0‖Λα+1u‖2L2dτ]14+C[∫t0‖Λα+1u‖2L2dτ]14[∫t0(‖u3‖Lq+‖b‖Lq+‖v‖Lq)2αq(2α−1)q−3(‖∇u‖L2+‖∇b‖L2+‖∇v‖L2)2dτ]Θ22[∫t0(‖Λα+1u‖L2+‖Λα+1b‖L2+‖Λα+1v‖L2)2dτ]Θ32≤C[∫t0‖Λα+1u‖2L2dτ]14+C[∫t0(‖Λα+1u‖L2+‖Λα+1b‖L2+‖Λα+1v‖L2)2dτ]2Θ3+14⋅[∫t0(‖u3‖Lq+‖b‖Lq+‖v‖Lq)2αq(2α−1)q−3(‖∇u‖L2+‖∇b‖L2+‖∇v‖L2)2dτ]Θ2≤C[∫t0‖Λα+1u‖2L2dτ]14+C[∫t0(‖Λα+1u‖L2+‖Λα+1b‖L2+‖Λα+1v‖L2)2dτ]2Θ3+14⋅[∫t0(‖u3‖Lq+‖b‖Lq+‖v‖Lq)Θ4(‖∇u‖L2+‖∇b‖L2+‖∇v‖L2)2dτ]3(2α−1)q+3(1−ϵ)−124[(2α−1)q−3(1−ϵ)]≤C+C∫t0(‖u3‖Lq+‖b‖Lq+‖v‖Lq)Θ4(‖∇u‖L2+‖∇b‖L2+‖∇v‖L2)2dτ+116∫t0(‖Λα+1u‖L2+‖Λα+1b‖L2+‖Λα+1v‖L2)2dτ, | (2.41) |
where $\Theta_4=\dfrac{8\alpha q}{3(2\alpha-1)q+3(1-\epsilon)-12}$.
We substitute (2.40) and (2.41) into (2.39) and then use Young's inequality to obtain
12(‖∇u(t)‖2L2+‖∇v(t)‖2L2+‖∇b(t)‖2L2)+34∫t0(‖Λα+1u(τ)‖2L2+‖Λα+1v(τ)‖2L2+‖Λα+1b(τ)‖2L2)dτ+∫t0κ‖∇∇⋅v(τ)‖2L2dτ≤C+C∫t0(‖b‖2αq(2α−1)q−3Lq+‖v‖2αq(2α−1)q−3Lq)(‖∇u‖2L2+‖∇b‖2L2+‖∇v‖2L2)dτ+C∫t0(‖u3‖Lq+‖b‖Lq+‖v‖Lq)Θ4(‖∇u‖L2+‖∇b‖L2+‖∇v‖L2)2dτ+C∫t0(‖u3‖Lq+‖b‖Lq+‖v‖Lq)2αq(2α−1)q−3(‖∇u‖L2+‖∇b‖L2+‖∇v‖L2)2dτ+18[∫t0(‖Λα+1u‖L2+‖Λα+1b‖L2+‖Λα+1v‖L2)2dτ]≤C+C∫t0(‖u3‖Θ4Lq+‖b‖Θ4Lq+‖v‖Θ4Lq)(‖∇u‖L2+‖∇b‖L2+‖∇v‖L2)2dτ+14∫t0(‖Λα+1u‖2L2+‖Λα+1b‖2L2+‖Λα+1v‖2L2)dτ. | (2.42) |
Then we have
‖∇u(t)‖2L2+‖∇v(t)‖2L2+‖∇b(t)‖2L2+∫t0(‖Λα+1u(τ)‖2L2+‖Λα+1v(τ)‖2L2+‖Λα+1b(τ)‖2L2)dτ+∫t0κ‖∇∇⋅v(τ)‖2L2dτ≤C+C∫t0(‖u3‖Θ4Lq+‖b‖Θ4Lq+‖v‖Θ4Lq)(‖∇u‖L2+‖∇b‖L2+‖∇v‖L2)2dτ. | (2.43) |
Thanks to Gronwall's inequality and condition (1.3), we obtain
‖∇u(t)‖2L2+‖∇v(t)‖2L2+‖∇b(t)‖2L2+∫t0(‖Λα+1u(τ)‖2L2+‖Λα+1v(τ)‖2L2+‖Λα+1b(τ)‖2L2)dτ+∫t0κ‖∇∇⋅v(τ)‖2L2dτ≤Cexp[C∫t0(‖u3‖Θ4Lq+‖b‖Θ4Lq+‖v‖Θ4Lq)dτ]<∞. | (2.44) |
Finally, we consider the case q=∞. By repeating the above procedure, we derive that
$$E(t)\le CJ_0+C\int_0^t\big(\|u_3\|_{L^\infty}+\|b\|_{L^\infty}+\|v\|_{L^\infty}\big)^{\frac{2\alpha}{2\alpha-1}}\big(\|\nabla u\|_{L^2}+\|\nabla b\|_{L^2}+\|\nabla v\|_{L^2}\big)^2\,d\tau.$$
Thanks to Gronwall's inequality and condition (1.3), we obtain
‖∇u(t)‖2L2+‖∇v(t)‖2L2+‖∇b(t)‖2L2+∫t0(‖Λα+1u(τ)‖2L2+‖Λα+1v(τ)‖2L2+‖Λα+1b(τ)‖2L2)dτ+∫t0κ‖∇∇⋅v(τ)‖2L2dτ≤Cexp[C∫t0(‖u3‖8α3(2α−1)L∞+‖b‖8α3(2α−1)L∞+‖v‖8α3(2α−1)L∞)dτ]<∞. | (2.45) |
By the above steps, we have established a higher-order a priori estimate of the solution; the boundedness of this higher-order norm yields the smoothness of the solution. This completes the proof of Theorem 1.1.
In this paper, we studied a regularity criterion for weak solutions of the three-dimensional magneto-micropolar fluid equations in the case 1 ≤ α = β = γ ≤ 3/2. The regularity of weak solutions for 1 ≤ α, β, γ ≤ 3/2 on ℝ³ remains an open problem, and we hope that the method of this paper can provide inspiration for its solution.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This research was supported by the Basic Research Project of the Key Scientific Research Project Plan of Universities in Henan Province (Grant No. 20ZX002).
The authors declare there is no conflict of interest.
[1] | Z. Akata, F. Perronnin, Z. Harchaoui, C. Schmid, Label-embedding for attribute-based classification, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2013), 819–826. https://doi.org/10.1109/CVPR.2013.111 |
[2] |
Z. Akata, F. Perronnin, Z. Harchaoui, C. Schmid, Label-embedding for image classification, IEEE T. Pattern Anal., 38 (2015), 1425–1438. https://doi.org/10.1109/TPAMI.2015.2487986 doi: 10.1109/TPAMI.2015.2487986
![]() |
[3] | Z. Akata, S. Reed, D. Walter, H. Lee, B. Schiele, Evaluation of output embeddings for fine-grained image classification, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2015), 2927–2936. https://doi.org/10.1109/CVPR.2015.7298911 |
[4] | Y. Annadani, S. Biswas, Preserving semantic relations for zero-shot learning, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2018), 7603–7612. |
[5] | N. Bendre, K. Desai, P. Najafirad, Generalized zero-shot learning using multimodal variational auto-encoder with semantic concepts, IEEE International Conference on Image Processing (ICIP), (2021), 1284–1288. https://doi.org/10.1109/ICIP42928.2021.9506108 |
[6] |
I. Biederman, Recognition-by-components: a theory of human image understanding, Psychol. Rev., 94 (1987), 115. 10.1037/0033-295X.94.2.115 doi: 10.1037/0033-295X.94.2.115
![]() |
[7] | L. Bo, Q. Dong, Z. Hu, Hardness sampling for self-training based transductive zero-shot learning, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2021), 16499–16508. https://doi.org/10.1109/CVPR46437.2021.01623 |
[8] | S. Changpinyo, W.-L. Chao, B. Gong, F. Sha, Synthesized classifiers for zero-shot learning, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2016), 5327–5336. https://doi.org/10.1109/CVPR.2016.575 |
[9] | W.-L. Chao, S. Changpinyo, B. Gong, F. Sha, An empirical study and analysis of generalized zero-shot learning for object recognition in the wild, European Conference on Computer Vision (ECCV), (2016), 52–68. https://doi.org/10.1007/978-3-319-46475-6_4 |
[10] | Q. Chen, W. Wang, K. Huang, F. Coenen, Zero-shot text classification via knowledge graph embedding for social media data, IEEE Internet Things, (2021). https://doi.org/10.1109/JIOT.2021.3093065 |
[11] | Y.-J. Chen, Y.-J. Chang, S.-C. Wen, Y. Shi, X. Xu, T.-Y. Ho, Q. Jia, M. Huang, et al., Zero-shot medical image artifact reduction, IEEE 17th International Symposium on Biomedical Imaging (ISBI), (2020), 862–866. https://doi.org/10.1109/ISBI45749.2020.9098566 |
[12] | J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, L. Fei-Fei, Imagenet: A large-scale hierarchical image database, 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2009), 248–255. https://doi.org/10.1109/CVPR.2009.5206848 |
[13] | M. Elhoseiny, B. Saleh, A. Elgammal, Write a classifier: Zero-shot learning using purely textual descriptions, Proceedings of the IEEE International Conference on Computer Vision (ICCV), (2013), 2584–2591. https://doi.org/10.1109/ICCV.2013.321 |
[14] | M. Elhoseiny, Y. Zhu, H. Zhang, A. Elgammal, Link the head to the" beak": Zero shot learning from noisy text description at part precision, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2017), 5640–5649. https://doi.org/10.1109/CVPR.2017.666 |
[15] | A. Farhadi, I. Endres, D. Hoiem, D. Forsyth, Describing objects by their attributes, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2009), 1778–1785. https://doi.org/10.1109/CVPR.2009.5206772 |
[16] | C. Finn, P. Abbeel, S. Levine, Model-agnostic meta-learning for fast adaptation of deep networks, Proceedings of the 34th International Conference on Machine Learning (ICML), 70 (2017), |
[17] | A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, M. Ranzato, T. Mikolov, Devise: A deep visual-semantic embedding model, Advances in neural information processing systems, 26 (2013), 2121–2129. |
[18] |
Y. Fu, T. M. Hospedales, T. Xiang, S. Gong, Transductive multi-view zero-shot learning, IEEE T. Pattern Anal., 37 (2015), 2332–2345. https://doi.org/10.1109/TPAMI.2015.2408354 doi: 10.1109/TPAMI.2015.2408354
![]() |
[19] | Z. Fu, T. Xiang, E. Kodirov, S. Gong, Zero-shot object recognition by semantic manifold distance, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2015), 2635–2644. https://doi.org/10.1109/CVPR.2015.7298879 |
[20] | I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative adversarial nets, Advances in neural information processing systems, (2014), 2672–2680. |
[21] | I. J. Goodfellow, J. Shlens, C. Szegedy, Explaining and harnessing adversarial examples, 3rd International Conference on Learning Representations (ICLR), (2015). |
[22] |
M. Gull, O. Arif, Generalized zero-shot learning using identifiable variational autoencoders, Expert Syst. Appl., 191 (2022), 116268. https://doi.org/10.1016/j.eswa.2021.116268 doi: 10.1016/j.eswa.2021.116268
![]() |
[23] | I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, A. C. Courville, Improved training of wasserstein gans, Advances in neural information processing systems, 30 (2017), 5767–5777. |
[24] | Y. Guo, G. Ding, J. Han, Y. Gao, Sitnet: Discrete similarity transfer network for zero-shot hashing., the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI), (2017), 1767–1773. https://doi.org/10.24963/ijcai.2017/245 |
[25] | Z. Han, Z. Fu, S. Chen, J. Yang, Contrastive embedding for generalized zero-shot learning, {IEEE/CVF} Conference on Computer Vision and Pattern Recognition (CVPR), (2021), 2371–2381. https://doi.org/10.1109/CVPR46437.2021.00240 |
[26] | K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2016), 770–778. https://doi.org/10.1109/CVPR.2016.90 |
[27] | R. L. Hu, C. Xiong, R. Socher, Correction networks: Meta-learning for zero-shot learning, 2018. |
[28] | H. Huang, C. Wang, P. S. Yu, C.-D. Wang, Generative dual adversarial network for generalized zero-shot learning, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2019), 801–810. https://doi.org/10.1109/CVPR.2019.00089 |
[29] | K. Huang, A. Hussain, Q.-F. Wang, R. Zhang, Deep Learning: Fundamentals, Theory and Applications, Springer, 2019. |
[30] | Y.-H. Hubert Tsai, L.-K. Huang, R. Salakhutdinov, Learning robust visual-semantic embeddings, Proceedings of the IEEE International Conference on Computer Vision (ICCV), (2017), 3571–3580. https://doi.org/10.1109/ICCV.2017.386 |
[31] | D. Huynh, E. Elhamifar, Fine-grained generalized zero-shot learning via dense attribute-based attention, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2020), 4483–4493. https://doi.org/10.1109/CVPR42600.2020.00454 |
[32] | D. Jayaraman, F. Sha, K. Grauman, Decorrelating semantic visual attributes by resisting the urge to share, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2014), 1629–1636. https://doi.org/10.1109/CVPR.2014.211 |
[33] | Z. Ji, Y. Fu, J. Guo, Y. Pang, Z. M. Zhang, Stacked semantics-guided attention model for fine-grained zero-shot learning, Advances in Neural Information Processing Systems, 31 (2018), 5998–6007. |
[34] |
Z. Ji, H. Wang, Y. Pang, L. Shao, Dual triplet network for image zero-shot learning, Neurocomputing, 373 (2020), 90–97. https://doi.org/10.1016/j.neucom.2019.09.062 doi: 10.1016/j.neucom.2019.09.062
![]() |
[35] | H. Jiang, G. Yang, K. Huang, R. Zhang, W-net: one-shot arbitrary-style chinese character generation with deep neural networks, International Conference on Neural Information Processing, (2018), 483–493. https://doi.org/10.1007/978-3-030-04221-9_43 |
[36] | H. Jiang, R. Wang, S. Shan, X. Chen, Transferable contrastive network for generalized zero-shot learning, Proceedings of the IEEE International Conference on Computer Vision (ICCV), (2019), 9765–9774. https://doi.org/10.1109/ICCV.2019.00986 |
[37] | T. Joachims, Optimizing search engines using clickthrough data, Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, (2002), 133–142. https://doi.org/10.1145/775047.775067 |
[38] | M. Kampffmeyer, Y. Chen, X. Liang, H. Wang, Y. Zhang, E. P. Xing, Rethinking knowledge graph propagation for zero-shot learning, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2019), 11487–11496. https://doi.org/10.1109/CVPR.2019.01175 |
[39] | D. P. Kingma, M. Welling, Auto-encoding variational bayes, 2nd International Conference on Learning Representations (ICLR), (2014). |
[40] | T. N. Kipf, M. Welling, Semi-supervised classification with graph convolutional networks, 5th International Conference on Learning Representations (ICLR), OpenReview.net, (2017). |
[41] | E. Kodirov, T. Xiang, S. Gong, Semantic autoencoder for zero-shot learning, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2017), 3174–3183. https://doi.org/10.1109/CVPR.2017.473 |
[42] | A. Kumar, P. Sattigeri, A. Balakrishnan, Variational inference of disentangled latent concepts from unlabelled observations, 6th International Conference on Learning Representations (ICLR), (2018). |
[43] | C. H. Lampert, H. Nickisch, S. Harmeling, Learning to detect unseen object classes by between-class attribute transfer, 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2009), 951–958. https://doi.org/10.1109/CVPR.2009.5206594 |
[44] | C. H. Lampert, H. Nickisch, S. Harmeling, Attribute-based classification for zero-shot visual object categorization, IEEE T. Pattern Anal., 36 (2014), 453–465. https://doi.org/10.1109/TPAMI.2013.140 |
[45] | H. Larochelle, D. Erhan, Y. Bengio, Zero-data learning of new tasks, Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence, (2008), 646–651. |
[46] | J. Lei Ba, K. Swersky, S. Fidler, Predicting deep zero-shot convolutional neural networks using textual descriptions, Proceedings of the IEEE International Conference on Computer Vision (ICCV), (2015), 4247–4255. https://doi.org/10.1109/ICCV.2015.483 |
[47] | A. Li, Z. Lu, J. Guan, T. Xiang, L. Wang, J. Wen, Transferrable feature and projection learning with class hierarchy for zero-shot learning, Int. J. Comput. Vis., 128 (2020), 2810–2827. https://doi.org/10.1007/s11263-020-01342-x |
[48] | J. Li, X. Lan, Y. Liu, L. Wang, N. Zheng, Compressing unknown images with product quantizer for efficient zero-shot classification, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2019), 5463–5472. https://doi.org/10.1109/CVPR.2019.00561 |
[49] | J. Li, M. Jing, K. Lu, Z. Ding, L. Zhu, Z. Huang, Leveraging the invariant side of generative zero-shot learning, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2019), 7402–7411. https://doi.org/10.1109/CVPR.2019.00758 |
[50] | X. Li, Z. Xu, K. Wei, C. Deng, Generalized zero-shot learning via disentangled representation, Proceedings of the AAAI Conference on Artificial Intelligence, 35 (2021), 1966–1974. |
[51] | Y. Li, J. Zhang, J. Zhang, K. Huang, Discriminative learning of latent features for zero-shot recognition, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2018), 7463–7471. https://doi.org/10.1109/CVPR.2018.00779 |
[52] | T.-Y. Lin, A. RoyChowdhury, S. Maji, Bilinear CNN models for fine-grained visual recognition, Proceedings of the IEEE International Conference on Computer Vision (ICCV), (2015), 1449–1457. https://doi.org/10.1109/ICCV.2015.170 |
[53] | G. Liu, J. Guan, M. Zhang, J. Zhang, Z. Wang, Z. Lu, Joint projection and subspace learning for zero-shot recognition, 2019 IEEE International Conference on Multimedia and Expo (ICME), (2019), 1228–1233. https://doi.org/10.1109/ICME.2019.00214 |
[54] | L. Liu, T. Zhou, G. Long, J. Jiang, X. Dong, C. Zhang, Isometric propagation network for generalized zero-shot learning, 9th International Conference on Learning Representations (ICLR), OpenReview.net, (2021). |
[55] | Y. Liu, Q. Gao, J. Li, J. Han, L. Shao, Zero shot learning via low-rank embedded semantic autoencoder, the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI), (2018), 2490–2496. https://doi.org/10.24963/ijcai.2018/345 |
[56] | Y. Liu, X. Gao, Q. Gao, J. Han, L. Shao, Label-activating framework for zero-shot learning, Neural Networks, 121 (2020), 1–9. https://doi.org/10.1016/j.neunet.2019.08.023 |
[57] | Y. Liu, J. Guo, D. Cai, X. He, Attribute attention for semantic disambiguation in zero-shot learning, Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), (2019), 6698–6707. https://doi.org/10.1109/ICCV.2019.00680 |
[58] | Y. Liu, L. Zhou, X. Bai, Y. Huang, L. Gu, J. Zhou, T. Harada, Goal-oriented gaze estimation for zero-shot learning, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2021), 3794–3803. https://doi.org/10.1109/CVPR46437.2021.00379 |
[59] | Z. Liu, Y. Li, L. Yao, X. Wang, G. Long, Task aligned generative meta-learning for zero-shot learning, Proceedings of The Thirty-Fifth AAAI Conference on Artificial Intelligence, (2021). |
[60] | Y. Long, L. Liu, L. Shao, Attribute embedding with visual-semantic ambiguity removal for zero-shot learning, Proceedings of the British Machine Vision Conference 2016, BMVA Press, (2016). https://doi.org/10.5244/C.30.40 |
[61] | Y. Long, L. Liu, L. Shao, F. Shen, G. Ding, J. Han, From zero-shot learning to conventional supervised classification: Unseen visual data synthesis, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2017), 1627–1636. https://doi.org/10.1109/CVPR.2017.653 |
[62] | Y. Luo, X. Wang, F. Pourpanah, Dual VAEGAN: A generative model for generalized zero-shot learning, Appl. Soft Comput., 107 (2021), 107352. https://doi.org/10.1016/j.asoc.2021.107352 |
[63] | C. Lyu, K. Huang, H.-N. Liang, A unified gradient regularization family for adversarial examples, 2015 IEEE international conference on data mining, (2015), 301–309. https://doi.org/10.1109/ICDM.2015.84 |
[64] | P. Ma, X. Hu, A variational autoencoder with deep embedding model for generalized zero-shot learning, Proceedings of the AAAI Conference on Artificial Intelligence, 34 (2020), 11733–11740. https://doi.org/10.1609/aaai.v34i07.6844 |
[65] | S. Min, H. Yao, H. Xie, C. Wang, Z.-J. Zha, Y. Zhang, Domain-aware visual bias eliminating for generalized zero-shot learning, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2020), 12664–12673. https://doi.org/10.1109/CVPR42600.2020.01268 |
[66] | A. Mishra, S. Krishna Reddy, A. Mittal, H. A. Murthy, A generative model for zero shot learning using conditional variational autoencoders, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) workshops, (2018), 2188–2196. https://doi.org/10.1109/CVPRW.2018.00294 |
[67] | T. M. Mitchell, S. V. Shinkareva, A. Carlson, K.-M. Chang, V. L. Malave, R. A. Mason, M. A. Just, Predicting human brain activity associated with the meanings of nouns, Science, 320 (2008), 1191–1195. https://doi.org/10.1126/science.1152876 |
[68] | M. Nilsback, A. Zisserman, Automated flower classification over a large number of classes, Sixth Indian Conference on Computer Vision, Graphics & Image Processing, (2008), 722–729. https://doi.org/10.1109/ICVGIP.2008.47 |
[69] | M. Norouzi, T. Mikolov, S. Bengio, Y. Singer, J. Shlens, A. Frome, G. Corrado, J. Dean, Zero-shot learning by convex combination of semantic embeddings, 2nd International Conference on Learning Representations (ICLR) (eds. Y. Bengio and Y. LeCun), (2014). |
[70] | M. Palatucci, D. Pomerleau, G. E. Hinton, T. M. Mitchell, Zero-shot learning with semantic output codes, Advances in neural information processing systems, (2009), 1410–1418. |
[71] | D. Parikh, K. Grauman, Relative attributes, 2011 International Conference on Computer Vision (ICCV), (2011), 503–510. https://doi.org/10.1109/ICCV.2011.6126281 |
[72] | G. Patterson, J. Hays, SUN attribute database: Discovering, annotating, and recognizing scene attributes, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2012), 2751–2758. |
[73] | F. Pourpanah, M. Abdar, Y. Luo, X. Zhou, R. Wang, C. P. Lim, X.-Z. Wang, A review of generalized zero-shot learning methods, arXiv preprint arXiv: 2011.08641. |
[74] | R. Qiao, L. Liu, C. Shen, A. Van Den Hengel, Less is more: zero-shot learning from online textual documents with noise suppression, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2016), 2249–2257. https://doi.org/10.1109/CVPR.2016.247 |
[75] | S. Ravi, H. Larochelle, Optimization as a model for few-shot learning, 5th International Conference on Learning Representations (ICLR), OpenReview.net, (2017). |
[76] | S. Reed, Z. Akata, H. Lee, B. Schiele, Learning deep representations of fine-grained visual descriptions, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2016), 49–58. https://doi.org/10.1109/CVPR.2016.13 |
[77] | B. Romera-Paredes, P. Torr, An embarrassingly simple approach to zero-shot learning, International Conference on Machine Learning (ICML), (2015), 2152–2161. |
[78] | O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, et al., ImageNet large scale visual recognition challenge, Int. J. Comput. Vision, 115 (2015), 211–252. https://doi.org/10.1007/s11263-015-0816-y |
[79] | C. Samplawski, E. Learned-Miller, H. Kwon, B. M. Marlin, Zero-shot learning in the presence of hierarchically coarsened labels, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, (2020), 926–927. https://doi.org/10.1109/CVPRW50498.2020.00471 |
[80] | M. B. Sariyildiz, R. G. Cinbis, Gradient matching generative networks for zero-shot learning, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2019), 2168–2178. https://doi.org/10.1109/CVPR.2019.00227 |
[81] | E. Schonfeld, S. Ebrahimi, S. Sinha, T. Darrell, Z. Akata, Generalized zero-and few-shot learning via aligned variational autoencoders, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2019), 8247–8255. https://doi.org/10.1109/CVPR.2019.00844 |
[82] | T. Shermin, S. W. Teng, F. Sohel, M. Murshed, G. Lu, Integrated generalized zero-shot learning for fine-grained classification, Pattern Recogn., 122 (2022), 108246. https://doi.org/10.1016/j.patcog.2021.108246 |
[83] | I. Skorokhodov, M. Elhoseiny, Class normalization for (continual)? generalized zero-shot learning, 9th International Conference on Learning Representations (ICLR), OpenReview.net, 2021. |
[84] | J. Snell, K. Swersky, R. Zemel, Prototypical networks for few-shot learning, Advances in Neural Information Processing Systems, (2017), 4077–4087. |
[85] | R. Socher, M. Ganjoo, C. D. Manning, A. Y. Ng, Zero-shot learning through cross-modal transfer, Advances in Neural Information Processing Systems, (2013), 935–943. |
[86] | K. Sohn, H. Lee and X. Yan, Learning structured output representation using deep conditional generative models, Advances in neural information processing systems, 28 (2015), 3483–3491. |
[87] | J. Song, C. Shen, Y. Yang, Y. Liu, M. Song, Transductive unbiased embedding for zero-shot learning, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2018), 1024–1033. https://doi.org/10.1109/CVPR.2018.00113 |
[88] | F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. Torr, T. M. Hospedales, Learning to compare: Relation network for few-shot learning, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2018), 1199–1208. https://doi.org/10.1109/CVPR.2018.00131 |
[89] | C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, et al., Going deeper with convolutions, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2015), 1–9. https://doi.org/10.1109/CVPR.2015.7298594 |
[90] | V. K. Verma, G. Arora, A. Mishra, P. Rai, Generalized zero-shot learning via synthesized examples, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2018), 4281–4289. https://doi.org/10.1109/CVPR.2018.00450 |
[91] | V. K. Verma, D. Brahma, P. Rai, Meta-learning for generalized zero-shot learning, The Thirty-Fourth AAAI Conference on Artificial Intelligence, (2020), 6062–6069. https://doi.org/10.1609/aaai.v34i04.6069 |
[92] | V. K. Verma, A. Mishra, A. Pandey, H. A. Murthy, P. Rai, Towards zero-shot learning with fewer seen class examples, IEEE Winter Conference on Applications of Computer Vision, (2021), 2240–2250. https://doi.org/10.1109/WACV48630.2021.00229 |
[93] | V. K. Verma, P. Rai, A simple exponential family framework for zero-shot learning, Joint European conference on machine learning and knowledge discovery in databases, (2017), 792–808. https://doi.org/10.1007/978-3-319-71246-8_48 |
[94] | O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra, Matching networks for one shot learning, Advances in neural information processing systems, 29 (2016), 3630–3638. |
[95] | C. Wah, S. Branson, P. Welinder, P. Perona, S. Belongie, The caltech-ucsd birds-200-2011 dataset, California Institute of Technology, 2011. |
[96] | D. Wang, Y. Li, Y. Lin, Y. Zhuang, Relational knowledge transfer for zero-shot learning, Thirtieth AAAI Conference on Artificial Intelligence, (2016). |
[97] | F. Wang, M. Jiang, C. Qian, S. Yang, C. Li, H. Zhang, X. Wang, X. Tang, Residual attention network for image classification, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2017), 3156–3164. https://doi.org/10.1109/CVPR.2017.683 |
[98] | J. Wang, B. Jiang, Zero-shot learning via contrastive learning on dual knowledge graphs, Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), (2021), 885–892. https://doi.org/10.1109/ICCVW54120.2021.00104 |
[99] | W. Wang, V. W. Zheng, H. Yu, C. Miao, A survey of zero-shot learning: Settings, methods, and applications, ACM T. Intel. Syst. Tec. (TIST), 10 (2019), 1–37. |
[100] | X. Wang, Y. Ye, A. Gupta, Zero-shot recognition via semantic embeddings and knowledge graphs, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2018), 6857–6866. https://doi.org/10.1109/CVPR.2018.00717 |
[101] | Y. Wang, H. Zhang, Z. Zhang, Y. Long, Asymmetric graph based zero shot learning, Multim. Tools Appl., 79 (2020), 33689–33710. https://doi.org/10.1007/s11042-019-7689-y |
[102] | J. Weston, S. Bengio, N. Usunier, Wsabie: Scaling up to large vocabulary image annotation, the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI), (2011). |
[103] | Y. Xian, Z. Akata, G. Sharma, Q. Nguyen, M. Hein, B. Schiele, Latent embeddings for zero-shot classification, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2016), 69–77. https://doi.org/10.1109/CVPR.2016.15 |
[104] | Y. Xian, C. H. Lampert, B. Schiele, Z. Akata, Zero-shot learning-a comprehensive evaluation of the good, the bad and the ugly, IEEE T. Pattern Anal., 41 (2018), 2251–2265. |
[105] | Y. Xian, T. Lorenz, B. Schiele, Z. Akata, Feature generating networks for zero-shot learning, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2018), 5542–5551. https://doi.org/10.1109/CVPR.2018.00581 |
[106] | Y. Xian, S. Sharma, B. Schiele, Z. Akata, f-VAEGAN-D2: A feature generating framework for any-shot learning, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2019), 10275–10284. https://doi.org/10.1109/CVPR.2019.01052 |
[107] | C. Xie, H. Xiang, T. Zeng, Y. Yang, B. Yu, Q. Liu, Cross knowledge-based generative zero-shot learning approach with taxonomy regularization, Neural Networks, 139 (2021), 168–178. https://doi.org/10.1016/j.neunet.2021.02.009 |
[108] | G.-S. Xie, L. Liu, X. Jin, F. Zhu, Z. Zhang, J. Qin, Y. Yao, L. Shao, Attentive region embedding network for zero-shot learning, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2019), 9384–9393. https://doi.org/10.1109/CVPR.2019.00961 |
[109] | G.-S. Xie, L. Liu, F. Zhu, F. Zhao, Z. Zhang, Y. Yao, J. Qin, L. Shao, Region graph embedding network for zero-shot learning, European Conference on Computer Vision (ECCV), (2020), 562–580. https://doi.org/10.1007/978-3-030-58548-8_33 |
[110] | B. Xu, Z. Zeng, C. Lian, Z. Ding, Semi-supervised low-rank semantics grouping for zero-shot learning, IEEE Trans. Image Process., 30 (2021), 2207–2219. https://doi.org/10.1109/TIP.2021.3050677 |
[111] | T. Xu, Y. Zhao, X. Liu, Dual generative network with discriminative information for generalized zero-shot learning, Complexity, 2021 (2021), 6656797:1–6656797:11. https://doi.org/10.1155/2021/6656797 |
[112] | W. Xu, Y. Xian, J. Wang, B. Schiele, Z. Akata, Attribute prototype network for zero-shot learning, Advances in Neural Information Processing Systems, 33 (2020), 21969–21980. |
[113] | G. Yang, K. Huang, R. Zhang, J. Y. Goulermas, A. Hussain, Self-focus deep embedding model for coarse-grained zero-shot classification, International Conference on Brain Inspired Cognitive Systems, (2019), 12–22. |
[114] | G. Yang, K. Huang, R. Zhang, J. Y. Goulermas, A. Hussain, Inductive generalized zero-shot learning with adversarial relation network, Joint European Conference on Machine Learning and Knowledge Discovery in Databases, (2020), 724–739. |
[115] | B. Yao, A. Khosla, L. Fei-Fei, Combining randomization and discrimination for fine-grained image categorization, CVPR 2011, (2011), 1577–1584. https://doi.org/10.1109/CVPR.2011.5995368 |
[116] | Z. Ye, F. Hu, F. Lyu, L. Li, K. Huang, Disentangling semantic-to-visual confusion for zero-shot learning, IEEE T. Multimedia, (2021). https://doi.org/10.1109/TMM.2021.3089017 |
[117] | Z. Ye, F. Lyu, L. Li, Q. Fu, J. Ren, F. Hu, SR-GAN: Semantic rectifying generative adversarial network for zero-shot learning, 2019 IEEE International Conference on Multimedia and Expo (ICME), (2019), 85–90. https://doi.org/10.1109/ICME.2019.00023 |
[118] | Y. Yu, Z. Ji, J. Guo, Z. Zhang, Zero-shot learning via latent space encoding, IEEE T. Cybernetics, 49 (2018), 3755–3766. https://doi.org/10.1109/TCYB.2018.2850750 |
[119] | F. Zhang, G. Shi, Co-representation network for generalized zero-shot learning, International Conference on Machine Learning (ICML), (2019), 7434–7443. |
[120] | H. Zhang, Y. Long, Y. Guan, L. Shao, Triple verification network for generalized zero-shot learning, IEEE T. Image Process., 28 (2019), 506–517. https://doi.org/10.1109/TIP.2018.2869696 |
[121] | L. Zhang, T. Xiang, S. Gong, Learning a deep embedding model for zero-shot learning, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2017), 2021–2030. https://doi.org/10.1109/CVPR.2017.321 |
[122] | Z. Zhang, V. Saligrama, Zero-shot learning via semantic similarity embedding, Proceedings of the IEEE International Conference on Computer Vision (ICCV), (2015), 4166–4174. https://doi.org/10.1109/ICCV.2015.474 |
[123] | H. Zheng, J. Fu, T. Mei, J. Luo, Learning multi-attention convolutional neural network for fine-grained image recognition, Proceedings of the IEEE International Conference on Computer Vision (ICCV), (2017), 5209–5217. https://doi.org/10.1109/ICCV.2017.557 |
[124] | Y. Zhu, J. Xie, B. Liu, A. Elgammal, Learning feature-to-feature translator by alternating back-propagation for generative zero-shot learning, Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), (2019), 9844–9854. https://doi.org/10.1109/ICCV.2019.00994 |
[125] | Y. Zhu, J. Xie, Z. Tang, X. Peng, A. Elgammal, Semantic-guided multi-attention localization for zero-shot learning, Advances in Neural Information Processing Systems, 32 (2019), 14917–14927. |