Dedicated to Neil Trudinger on his 80th birthday with friendship and admiration.
H. Hopf established in [3] that an immersion of a topological 2-sphere in R3 with constant mean curvature must be a standard sphere. He also made the conjecture that the conclusion holds for all immersed connected closed hypersurfaces in Rn+1 with constant mean curvature. A. D. Alexandrov proved in [1] that if M is an embedded connected closed hypersurface with constant mean curvature, then M must be a standard sphere. If M is immersed instead of embedded, the conclusion does not hold in general, as shown by W.-Y. Hsiang in [4] for n≥3 and by Wente in [15] for n=2. A. Ros in [13] gave a different proof for the theorem of Alexandrov making use of the variational properties of the mean curvature.
In this note, we give an exposition of some results in [5,6,7,8,9]. It is suggested that the reader read the introductions of [6,7,9].
Throughout the paper M is a smooth compact connected embedded hypersurface in Rn+1, k(X)=(k1(X),⋯,kn(X)) denotes the principal curvatures of M at X with respect to the inner normal, and the mean curvature of M is
$$H(X) := \frac{1}{n}\big[k_1(X)+\cdots+k_n(X)\big].$$
We use G to denote the open bounded set bounded by M.
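As a concrete sanity check (a sketch added here, not part of the paper), the principal curvatures of a graph $x_{n+1}=f(x')$ at a point are the eigenvalues of $g^{-1}h$, where $g_{ij}=\delta_{ij}+f_if_j$ is the first fundamental form and $h_{ij}=f_{ij}/\sqrt{1+|\nabla f|^2}$ the second (with respect to the upward normal). For the upper hemisphere of a sphere of radius $r$, all $k_i$ equal $1/r$ with respect to the inner normal, so $H=1/r$:

```python
import numpy as np

def graph_curvatures(grad_f, hess_f):
    """Principal curvatures of the graph x_{n+1} = f(x') at one point,
    w.r.t. the upward unit normal: eigenvalues of g^{-1} h, where
    g = I + grad_f grad_f^T and h = hess_f / sqrt(1 + |grad_f|^2)."""
    W = np.sqrt(1.0 + grad_f @ grad_f)
    g = np.eye(len(grad_f)) + np.outer(grad_f, grad_f)
    h = hess_f / W
    return np.linalg.eigvals(np.linalg.solve(g, h))

# Upper hemisphere of radius r as the graph f(x') = sqrt(r^2 - |x'|^2).
r = 2.0
x = np.array([0.3, 0.1])                  # a point in the disk |x'| < r
f = np.sqrt(r**2 - x @ x)
grad_f = -x / f                           # exact gradient of f
hess_f = -(np.eye(2) / f + np.outer(x, x) / f**3)  # exact Hessian of f

k = graph_curvatures(grad_f, hess_f)      # w.r.t. the upward (outer) normal
k_inner = -k                              # switch to the inner normal
H = k_inner.mean()                        # H = (k_1 + ... + k_n)/n
print(k_inner, H)                         # both curvatures 1/r = 0.5, H = 0.5
```

For the sphere the computation is exact up to rounding: $g^{-1}h=-\tfrac{1}{r}I$ at every point of the upper hemisphere.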
Li proved in [5] the following result, saying that if the mean curvature H:M→R has a Lipschitz extension K:Rn+1→R which is monotone in the Xn+1 direction, then M is symmetric about a hyperplane Xn+1=c.
Theorem 1.1. ([5]) Let M be a smooth compact connected hypersurface without boundary embedded in Rn+1, and let K be a Lipschitz function in Rn+1 satisfying
$$K(X',B) \le K(X',A), \qquad \forall\ X'\in\mathbb{R}^n,\ A\le B. \tag{1.1}$$
Suppose that at each point X of M the mean curvature H(X) equals K(X). Then M is symmetric about a hyperplane Xn+1=c.
In [5], K was assumed to be C1 for the above result, but the proof there only needs K being Lipschitz.
Li and Nirenberg then considered in [6] and [7] the more general question in which the condition H(X)=K(X) with K satisfying (1.1) is replaced by the weaker, more natural, condition:
Main Assumption. For any two points (X′,A),(X′,B)∈M satisfying A≤B and that {(X′,θA+(1−θ)B):0≤θ≤1} lies in ¯G, we have
$$H(X',B) \le H(X',A). \tag{1.2}$$
They showed in [6] that this assumption alone is not enough to guarantee the symmetry of M about some hyperplane Xn+1=c. The mean curvature H:M→R of the counterexample constructed in [6, Figure 4] has a monotone extension K:Rn+1→R which is Cα for every 0<α<1, but fails to be Lipschitz. The counterexample actually satisfies (1.2) with equality. They also constructed a counterexample [6, Section 6] showing that the inequality (1.2) does not imply a pairwise equality.
A conjecture was made in [7] after the introduction of
Condition S. M stays on one side of any hyperplane parallel to the Xn+1 axis that is tangent to M.
Conjecture 1. ([7]) Any smooth compact connected embedded hypersurface M in Rn+1 satisfying the Main Assumption and Condition S must be symmetric about a hyperplane Xn+1=c.
The conjecture for n=1 was proved in [6]. For n≥2, they introduced the following condition:
Condition T. Every line parallel to the Xn+1-axis that is tangent to M has contact of finite order.
Note that if M is real analytic then Condition T is automatically satisfied.
They proved in [7, Theorem 1] that M is symmetric about a hyperplane Xn+1=c under the Main Assumption, Conditions S and T, and a local convexity condition near points where the tangent planes are parallel to the Xn+1-axis. For convex M, their result is
Theorem 1.2. ([7]) Let M be a smooth compact convex hypersurface in Rn+1 satisfying the Main Assumption and Condition T. Then M must be symmetric about a hyperplane Xn+1=c.
The theorem of Alexandrov is more general in that one can replace the mean curvature by a wide class of symmetric functions of the principal curvatures. Similarly, Theorems 1.1 and 1.2 (as well as the more general [7, Theorem 1]) still hold when the mean curvature function is replaced by more general curvature functions.
Consider a triple (M,Γ,g): Let M be a compact connected C2 hypersurface without boundary embedded in Rn+1, and let g(k1,⋯,kn) be a C3 function, symmetric in (k1,⋯,kn), defined in an open convex neighborhood Γ of {(k1(X),⋯,kn(X)) | X∈M}, and satisfy
$$\frac{\partial g}{\partial k_i}(k) > 0, \ \ 1\le i\le n, \qquad\text{and}\qquad \frac{\partial^2 g}{\partial k_i\,\partial k_j}(k)\,\eta_i\eta_j \le 0, \quad \forall\ k\in\Gamma \text{ and } \eta\in\mathbb{R}^n. \tag{1.3}$$
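For instance (a routine verification, added here for concreteness), the normalized mean curvature $g(k)=\frac{1}{n}(k_1+\cdots+k_n)$, with $\Gamma=\mathbb{R}^n$, satisfies (1.3):

```latex
\frac{\partial g}{\partial k_i}(k) = \frac{1}{n} > 0, \qquad
\frac{\partial^2 g}{\partial k_i\,\partial k_j}(k)\,\eta_i\eta_j = 0 \le 0,
\qquad \forall\ k\in\mathbb{R}^n,\ \eta\in\mathbb{R}^n,
```

so the mean curvature is the special case $g=g_1$ of the class of curvature functions considered here.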
For convex M, their result ([7, Theorem 2]) is as follows.
Theorem 1.3. ([7]) Let the triple (M,Γ,g) satisfy (1.3). In addition, we assume that M is convex and satisfies Condition T and the Main Assumption with inequality (1.2) replaced by
$$g(k(X',B)) \le g(k(X',A)). \tag{1.4}$$
Then M must be symmetric about a hyperplane Xn+1=c.
For 1≤m≤n, let
$$\sigma_m(k_1,\cdots,k_n) = \sum_{1\le i_1<\cdots<i_m\le n} k_{i_1}\cdots k_{i_m}$$
be the m-th elementary symmetric function, and let
$$g_m := (\sigma_m)^{\frac{1}{m}}.$$
It is known that g=gm satisfies the above properties in
$$\Gamma_m := \Big\{(k_1,\cdots,k_n)\in\mathbb{R}^n \ \Big|\ \sigma_j(k_1,\cdots,k_n) > 0 \ \text{ for } 1\le j\le m\Big\}.$$
It is known that Γ1={k∈Rn | k1+⋯+kn>0}, Γn={k∈Rn | k1,⋯,kn>0}, Γm+1⊂Γm, and Γm is the connected component of {k∈Rn | σm(k)>0} containing Γn.
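As an illustration (hypothetical helper code, not from the paper), $\sigma_m$ can be computed directly from its definition and membership in the cones $\Gamma_m$ tested; e.g., $k=(3,3,-1)$ has $\sigma_1=5>0$ and $\sigma_2=3>0$ but $\sigma_3=-9<0$, so it lies in $\Gamma_2\setminus\Gamma_3$:

```python
from itertools import combinations
from math import prod

def sigma(m, k):
    """m-th elementary symmetric function of k = (k_1, ..., k_n)."""
    return sum(prod(c) for c in combinations(k, m))

def in_Gamma(m, k):
    """Membership in Gamma_m = {k : sigma_j(k) > 0 for 1 <= j <= m}."""
    return all(sigma(j, k) > 0 for j in range(1, m + 1))

k = (3, 3, -1)
print([sigma(m, k) for m in (1, 2, 3)])   # [5, 3, -9]
print(in_Gamma(2, k), in_Gamma(3, k))     # True False

# Gamma_{m+1} is contained in Gamma_m: a k in Gamma_3 is also in Gamma_2.
kp = (1.0, 2.0, 3.0)
assert in_Gamma(3, kp) and in_Gamma(2, kp)

# g_2 = sigma_2^{1/2} is well defined on Gamma_2, where sigma_2 > 0.
g2 = sigma(2, k) ** 0.5
```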
The method of proof of Theorems 1.2 and 1.3 (as well as the more general [7, Theorems 1 and 2]) begins as in that of the theorem of Alexandrov, using the method of moving planes. Then, as indicated in the introduction of [6], one is led to the need for variations of the classical Hopf Lemma. The Hopf Lemma is a local result. The needed variant of the Hopf Lemma to prove Theorem 1.2 (and Conjecture 1) was raised as an open problem ([7, Open Problem 2]) which remains open. The proof of Theorems 1.2 and 1.3 (as well as the more general [7, Theorems 1 and 2]) was based on the maximum principle, but also used a global argument.
In a recent paper [9], Li, Yan and Yao proved Conjecture 1 using a method different from that of [6] and [7], exploiting the variational properties of the mean curvature. In fact, they proved the symmetry result under a slightly weaker assumption than Condition S:
Condition S'. There exists some constant r>0, such that for every ¯X=(¯X′,¯Xn+1)∈M with a horizontal unit outer normal (denote it by ˉν=(ˉν′,0)), the vertical cylinder |X′−(¯X′+rˉν′)|=r has an empty intersection with G. (G is the bounded open set in Rn+1 bounded by the hypersurface M.)
Theorem 1.4. ([9]) Let M be a compact connected C2 hypersurface without boundary embedded in Rn+1, which satisfies both the Main Assumption and Condition S'. Then M must be symmetric about a hyperplane Xn+1=c.
Here are two conjectures, in increasing strength.
Conjecture 2. For n≥2 and 2≤m≤n, let M be a compact connected C2 hypersurface without boundary embedded in Rn+1 satisfying Condition S (or the slightly weaker Condition S') and {(k1(X),⋯,kn(X)) | X∈M}⊂Γm. We assume that M satisfies the Main Assumption with inequality (1.2) replaced by
$$\sigma_m(k(X',B)) \le \sigma_m(k(X',A)). \tag{1.5}$$
Then M must be symmetric about a hyperplane Xn+1=c.
The next one is for more general curvature functions.
Conjecture 3. For n≥2, let the triple (M,Γ,g) satisfy (1.3). In addition, we assume that M satisfies Condition S (or the slightly weaker Condition S') and the Main Assumption with inequality (1.2) replaced by (1.4). Then M must be symmetric about a hyperplane Xn+1=c.
The above two conjectures are open even for convex M.
Conjecture 2 can be approached in two ways. One is the method of moving planes, which leads to the study of variations of the classical Hopf Lemma. Such variations of the Hopf Lemma are of independent interest. A number of open problems and conjectures on such variations of the Hopf Lemma have been discussed in [6,7,8]. For related works, see [11] and [14]. We will give some discussion of this in Section 2.
Conjecture 2 can also be approached by using the variational properties of the higher order mean curvature (i.e., the σm-curvature). If the answer to Conjecture 2 is affirmative, then the inequality in (1.5) must be an equality. This curvature equality was proved in the following lemma, using the variational properties of the σm-curvature:
Lemma 1. (Y. Y. Li, X. Yan and Y. Yao) For n≥2 and 2≤m≤n, let M be a compact connected C2 hypersurface without boundary embedded in Rn+1 satisfying Condition S'. We assume that M satisfies the Main Assumption, with inequality (1.2) replaced by (1.5). Then (1.5) must be an equality for every pair of points.
The proofs of Theorem 1.4 and Lemma 1 will be sketched in Section 3.
We have discussed above symmetry properties of hypersurfaces in the Euclidean space. It is also interesting to study symmetry properties of hypersurfaces under ordered curvature assumptions in the hyperbolic space, including the study of the counterparts of Theorem 1.1, Theorem 1.4 and Conjecture 2 in the hyperbolic space. Extensions of the Alexandrov–Bernstein theorems to the hyperbolic space were given by do Carmo and Lawson in [2]; see also Nelli [10] for a survey on Alexandrov–Bernstein–Hopf theorems.
Let
$$\Omega=\{(t,y) \mid y\in\mathbb{R}^{n-1},\ |y|<1,\ 0<t<1\}, \tag{2.1}$$
$$u,\, v\in C^\infty(\overline{\Omega}),$$
$$u \ge v \ge 0 \quad\text{in } \Omega,$$
$$u(0,y)=v(0,y)\ \ \forall\ |y|<1; \qquad u(0,0)=v(0,0)=0,$$
$$u_t(0,0)=0,$$
$$u_t>0 \quad\text{in } \Omega.$$
We use $k^u(t,y)=(k^u_1(t,y),\cdots,k^u_n(t,y))$ to denote the principal curvatures of the graph of u at (t,y). Similarly, $k^v=(k^v_1,\cdots,k^v_n)$ denotes the principal curvatures of the graph of v.
Here are two plausible variations of the Hopf Lemma.
Open Problem 1. For n≥2 and 1≤m≤n, let u and v satisfy the above. Assume
$$\text{whenever } u(t,y)=v(s,y),\ 0<s<1,\ |y|<1, \ \text{ then } \ \sigma_m(k^u)(t,y) \le \sigma_m(k^v)(s,y).$$
Is it true that either
$$u\equiv v \ \text{ near } (0,0) \tag{2.2}$$
or
$$v\equiv 0 \ \text{ near } (0,0)? \tag{2.3}$$
A weaker version is
Open Problem 2. In addition to the assumption in Open Problem 1, we further assume that
$$w(t,y) := \begin{cases} v(t,y), & t\ge 0,\ |y|<1,\\ u(-t,y), & t<0,\ |y|<1, \end{cases} \qquad \text{is } C^\infty \text{ in } \{(t,y) \mid |t|<1,\ |y|<1\}.$$
Is it true that either (2.2) or (2.3) holds?
Open Problems 1 and 2 for m=1 are exactly the same as [7, Open Problems 1 and 2], where it was pointed out that an affirmative answer to Open Problem 2 for m=1 would yield a proof of Conjecture 1 by modification of the arguments in [6,7]. This applies to 2≤m≤n as well: an affirmative answer to Open Problem 2 for some 2≤m≤n would yield a proof of Conjecture 2 (with Condition S) for that m.
As mentioned earlier, the answer to Open Problem 1 for n=1 is yes, as proved in [6]. For n≥2, a number of conjectures and open problems on plausible variations of the Hopf Lemma were given in [6,7,8]. The study of such variations of the Hopf Lemma can first be made for the Laplace operator instead of the curvature operators. The following was studied in [8].
Let u≥v be in C∞(¯Ω) where Ω is given by (2.1). Assume that
$$u>0,\quad v>0,\quad u_t>0 \qquad\text{in } \Omega$$
and
$$u(0,y)=0 \qquad\text{for } |y|<1.$$
We impose a main condition for the Laplace operator:
$$\text{whenever } u(t,y)=v(s,y) \ \text{ for } 0<t\le s<1, \ \text{ then } \ \Delta u(t,y) \le \Delta v(s,y).$$
Under some conditions we wish to conclude that
$$u\equiv v \ \text{ in } \Omega. \tag{2.4}$$
The following two conjectures, in decreasing strength, were given in [8].
Conjecture 4. Assume, in addition to the above, that
$$u_t(0,0)=0. \tag{2.5}$$
Then (2.4) holds:
u≡v in Ω. |
Conjecture 5. In addition to (2.5) assume that
u(t,0) and v(t,0) vanish at t=0 of finite order. |
Then
u≡v in Ω. |
Partial results were given in [8] concerning these conjectures. On the other hand, the conjectures remain largely open.
Theorem 1.4 was proved in [9] by making use of the variational properties of the mean curvature operator. We sketch the proof of Theorem 1.4 below; see [9] for details.
For any smooth, closed hypersurface M embedded in Rn+1, let V:Rn+1→Rn+1 be a smooth vector field. Consider, for |t|<1,
$$M(t) := \{x + tV(x) \mid x\in M\}, \tag{3.1}$$
and
$$S(t) := \int_{M(t)} d\sigma = \text{area of } M(t).$$
It is well known that
$$\frac{d}{dt}S(t)\Big|_{t=0} = -\int_M V(x)\cdot\nu(x)\,H(x)\,d\sigma(x), \tag{3.2}$$
where H(x) is the mean curvature of M at x with respect to the inner unit normal ν.
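As a quick numerical illustration of (3.2) (a sketch added here, not from [9]; for n=1, M is the unit circle, the inner normal is ν=−(cos θ, sin θ) and H≡1), one can compare a finite-difference derivative of the perturbed length with the right-hand side for, say, the vertical field V(x)=(0, e^{x₂}):

```python
import math

N = 20_000
thetas = [2 * math.pi * i / N for i in range(N)]
dth = 2 * math.pi / N

def length(t):
    # Arc length of M(t) = {(cos th, sin th + t*exp(sin th))}: the unit
    # circle moved by t*V with the vertical field V(x) = (0, e^{x_2}).
    total = 0.0
    for th in thetas:
        dx = -math.sin(th)
        dy = math.cos(th) * (1.0 + t * math.exp(math.sin(th)))
        total += math.hypot(dx, dy) * dth
    return total

h = 1e-5
dS = (length(h) - length(-h)) / (2 * h)     # d/dt S(t) at t = 0

# Right-hand side of (3.2): -∫ V·ν H ds, with inner normal
# ν = -(cos th, sin th), H = 1 and ds = dth on the unit circle,
# so -V·ν = exp(sin th) * sin th.
rhs = sum(math.exp(math.sin(th)) * math.sin(th) for th in thetas) * dth

print(dS, rhs)    # the two values agree to several digits
```

Both quantities equal $\int_0^{2\pi} e^{\sin\theta}\sin\theta\,d\theta$, as an integration by parts confirms.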
Define the projection map π:(x′,xn+1)→x′, and set R:=π(M).
Condition S' assures that ν(ˉx), for ˉx∈M, is horizontal if and only if ˉx′∈∂R; that ∂R is C1,1 (with C1,1 norm under control); and that
$$M = M_1 \cup M_2 \cup \hat M,$$
where M1,M2 are respectively graphs of functions f1,f2:R∘→R, f1,f2∈C2(R∘),f1>f2 in R∘, and ˆM:={(x′,xn+1)∈M | x′∈∂R}≡M∩π−1(∂R). Note that f1,f2 are not in C0(R) in general.
Lemma 2.
$$H(x',f_1(x')) = H(x',f_2(x')) \qquad \forall\ x'\in R^\circ. \tag{3.3}$$
Proof. Take V(x)=en+1=(0,...,0,1), and let M(t) and S(t) be defined as above with this choice of V(x). Clearly, S(t) is independent of t. So we have, using (3.2) and the order assumption on the mean curvature, that
$$0 = \frac{d}{dt}S(t)\Big|_{t=0} = -\sum_{i=1}^{2}\int_{M_i} e_{n+1}\cdot\nu(x)\,H(x)\,d\sigma(x) = -\int_{R^\circ}\big[H(x',f_1(x')) - H(x',f_2(x'))\big]\,dx' \ge 0. \tag{3.4}$$
Using again the order assumption on the mean curvature, we obtain the curvature equality (3.3).
For any v∈C∞(Rn), let V(x):=v(x′)en+1, and let M(t) and S(t) be defined as above with this choice of V(x). We have, using (3.2) and (3.3), that
$$0 = \frac{d}{dt}S(t)\Big|_{t=0} = -\sum_{i=1}^{2}\int_{M_i} v(x')\,e_{n+1}\cdot\nu(x)\,H(x)\,d\sigma(x) = -\int_{R^\circ} v(x')\big[H(x',f_1(x')) - H(x',f_2(x'))\big]\,dx' = 0. \tag{3.5}$$
Theorem 1.4 is proved by contradiction as follows: If M is not symmetric about a hyperplane, then ∇(f1+f2) is not identically zero. We will find a particular V(x)=v(x′)en+1, with v∈C2loc(R∘), to make
$$\frac{d}{dt}S(t)\Big|_{t=0} \ne 0,$$
which contradicts (3.5).
Write
$$S(t) = \sum_{i=1}^{2}\int_{R^\circ} \sqrt{1+\big|\nabla\big[f_i(x') + t\,v(x')\big]\big|^2}\ dx' + \hat S,$$
where ˆS, the area of the vertical part of M, is independent of t (since v is zero near ∂R, so the vertical part of M is not moved).
A calculation gives
$$\frac{d}{dt}S(t)\Big|_{t=0} = \int_{R^\circ} \big[\nabla A(\nabla f_1(x')) - \nabla A(-\nabla f_2(x'))\big]\cdot \nabla v(x')\ dx',$$
where
$$A(q) := \sqrt{1+|q|^2}, \qquad q\in\mathbb{R}^n.$$
We know that
$$\nabla A(q) = \frac{q}{\sqrt{1+|q|^2}} \qquad\text{and}\qquad \nabla^2 A(q) \ge (1+|q|^2)^{-3/2}\,I > 0 \quad \forall\ q.$$
So [∇A(q1)−∇A(q2)]⋅(q1−q2)>0 for any q1≠q2.
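This strict monotonicity of ∇A (the gradient of the area integrand) is easy to spot-check numerically; the following is an illustrative sketch added here, not part of [9]:

```python
import math
import random

def grad_A(q):
    """Gradient of A(q) = sqrt(1 + |q|^2): grad A(q) = q / sqrt(1 + |q|^2)."""
    w = math.sqrt(1.0 + sum(x * x for x in q))
    return [x / w for x in q]

random.seed(0)
n = 3
for _ in range(1000):
    q1 = [random.uniform(-10, 10) for _ in range(n)]
    q2 = [random.uniform(-10, 10) for _ in range(n)]
    d = [a - b for a, b in zip(grad_A(q1), grad_A(q2))]
    # Strict monotonicity: [grad A(q1) - grad A(q2)] . (q1 - q2) > 0 for
    # q1 != q2, a consequence of the positive-definite Hessian bound above.
    assert sum(di * (a - b) for di, a, b in zip(d, q1, q2)) > 0
print("monotonicity verified on 1000 random pairs")
```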
If we could take v=f1+f2 (so that ∇v=∇(f1+f2)≢0), we would obtain
$$\frac{d}{dt}S(t)\Big|_{t=0} = \int_{R^\circ} \big[\nabla A(\nabla f_1(x')) - \nabla A(-\nabla f_2(x'))\big]\cdot \big[\nabla f_1(x') + \nabla f_2(x')\big]\ dx' > 0.$$
In general, f1+f2 cannot be taken as v directly, since it is not under control near ∂R. It turns out that Condition S' allows us to perform a smooth cutoff near ∂R and conclude the proof. We skip the crucial details, which can be found in the last few pages of [9].
Now we give the
Proof of Lemma 1. The proof is similar to that of Lemma 2; see also the proof of [9, Proposition 3] for more details. We still take V(X)=en+1 and let M(t) be as in (3.1). Consider
$$S_{m-1}(t) := \int_{M(t)} \sigma_{m-1}(x)\, d\sigma.$$
Clearly, Sm−1(t) is independent of t.
The variational properties of the higher order curvatures [12, Theorem B] give
$$\frac{d}{dt}S_{m-1}(t)\Big|_{t=0} = -m\int_M V(x)\cdot\nu(x)\,\sigma_m(x)\,d\sigma(x),$$
thus the same argument as (3.4) yields
$$0 = \frac{d}{dt}S_{m-1}(t)\Big|_{t=0} = -m\int_M V(x)\cdot\nu(x)\,\sigma_m(x)\,d\sigma(x) = -m\int_{R^\circ}\big[\sigma_m(x',f_1(x')) - \sigma_m(x',f_2(x'))\big]\,dx' \ge 0.$$
We deduce from the above, using the curvature inequality (1.5), that the equality in (1.5) must hold for every pair of points. Lemma 1 is proved.
Partially supported by NSF grants DMS-1501004, DMS-2000261, and Simons Fellows Award 677077.
The author declares no conflict of interest.
Cited by: Julie Clutterbuck, Jiakun Liu, Preface to the Special Issue: Nonlinear PDEs and geometric analysis – Dedicated to Neil Trudinger on the occasion of his 80th birthday, Mathematics in Engineering, 2023, 5, 1. doi: 10.3934/mine.2023095