Red imported fire ants (RIFA) are an invasive alien pest that can cause serious ecosystem damage. Timely detection, localization and elimination of RIFA nests can help control the spread of RIFA. To locate RIFA nests accurately, this paper proposes an improved deep learning method based on YOLOv4. The specific improvements are as follows: 1) We improved the GhostBottleNeck (GBN) and used it to replace the original CSP block of YOLOv4, so as to compress the network scale and reduce the consumption of computing resources. 2) An Efficient Channel Attention (ECA) mechanism was introduced into the GBN to enhance the feature extraction ability of the model. 3) We used Equalized Focal Loss to reduce the loss contribution of background noise. 4) We added and improved upsampling operations in YOLOv4 to enhance the whole network's understanding of multi-layer semantic features. 5) CutMix was added to the training process to improve the model's ability to identify occluded objects. The parameter count of the improved YOLOv4 is greatly reduced, and its abilities to locate objects and extract edge features are enhanced. Meanwhile, we used an unmanned aerial vehicle (UAV) to collect images of RIFA nests at different heights and in different scenes, and built the RIFA nest (RIFAN) aerial dataset. On the RIFAN dataset, quantitative analysis of the evaluation indicators shows that the mean average precision (mAP) of the improved YOLOv4 model reaches 99.26%, 5.9% higher than that of the original algorithm. Moreover, compared with Faster R-CNN, SSD and other algorithms, the improved YOLOv4 achieves excellent results. Finally, we deployed the model on a Raspberry Pi 4B embedded device mounted on the UAV, exploiting the model's lightweight and high-efficiency characteristics to achieve flexible and fast in-flight detection of RIFA nests.
Citation: Xiaotang Liu, Zheng Xing, Huanai Liu, Hongxing Peng, Huiming Xu, Jingqi Yuan, Zhiyu Gou. Combination of UAV and Raspberry Pi 4B: Airspace detection of red imported fire ant nests using an improved YOLOv4 model[J]. Mathematical Biosciences and Engineering, 2022, 19(12): 13582-13606. doi: 10.3934/mbe.2022634
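To make item 2) of the abstract concrete, the following is a minimal PyTorch sketch of an ECA-style channel attention block of the kind introduced into the GBN [29]; the module structure follows the ECA-Net design, but the kernel size of 3 is an illustrative assumption rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ECABlock(nn.Module):
    """ECA-style channel attention: global average pooling, a 1D convolution
    across channels, and a sigmoid gate (a sketch; k_size is assumed)."""
    def __init__(self, k_size: int = 3):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=(k_size - 1) // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) -> per-channel descriptor of shape (B, C, 1, 1)
        y = self.avg_pool(x)
        # treat channels as a 1D sequence: (B, 1, C) -> local cross-channel interaction
        y = self.conv(y.squeeze(-1).transpose(-1, -2)).transpose(-1, -2).unsqueeze(-1)
        # rescale the input feature map channel-wise
        return x * self.sigmoid(y)
```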
For a continuous risk outcome $y$ over a finite open interval, and given fixed effects ${x}_{1}, {x}_{2}, \dots, {x}_{k}$, we assume in this paper that the risk outcome $y$ follows the random effect model
y = \mathrm{\Phi}\left({a}_{0}+{a}_{1}{x}_{1}+\dots +{a}_{k}{x}_{k}+bs\right), | (1.1) |
where $\mathrm{\Phi}$ is a transformation of the type introduced in section 2 (e.g., the standard normal CDF), $s$ is a random effect, and $b > 0$ is a scale parameter.
Given random effect model (1.1), the expected value of $y$ given the fixed effects can be obtained by integrating over the random effect $s$.
We introduce a family of interval distributions based on variable transformations. Probability densities for these distributions are provided (Proposition 2.1). Parameters of model (1.1) can then be estimated by maximum likelihood approaches assuming an interval distribution. In some cases, these parameters admit an analytical solution without the need for numerical model fitting (Proposition 4.1). We call a model with a random effect, where parameters are estimated by maximum likelihood assuming an interval distribution, an interval distribution model.
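As a minimal sketch of what fitting an interval distribution model can look like in practice (Python with NumPy/SciPy; all names are illustrative), the following estimates the parameters of model (1.1) by maximum likelihood for the case where both $\mathrm{\Phi}$ and the density $f$ of $s$ are standard normal, using the density derived in section 2:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def neg_log_likelihood(params, y, X):
    # params = (a_0, ..., a_k, log b); b is log-parametrized to keep b > 0
    *a, log_b = params
    b = np.exp(log_b)
    z = norm.ppf(y)                    # z = Phi^{-1}(y)
    v = X @ np.asarray(a)              # v = a_0 + a_1 x_1 + ... (X has a leading column of ones)
    # log of density (2.3): log f((z - v)/b) - log b - log phi(z)
    ll = norm.logpdf((z - v) / b) - np.log(b) - norm.logpdf(z)
    return -ll.sum()

# hypothetical usage: y in (0, 1) of shape (n,), X of shape (n, k+1)
# res = minimize(neg_log_likelihood, x0=np.zeros(X.shape[1] + 1), args=(y, X))
```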
In its simplest form, the interval distribution model
The paper is organized as follows: in section 2, we introduce a family of interval distributions. A measure for tail fatness is defined. In section 3, we show examples of interval distributions and investigate their tail behaviours. We propose in section 4 an algorithm for estimating the parameters in model (1.1).
Interval distributions introduced in this section are defined for a risk outcome over a finite open interval $\left({c}_{0}, {c}_{1}\right)$.
Let
Let
\mathrm{\Phi}: D\to \left({c}_{0}, {c}_{1}\right) | (2.1) |
be a transformation with continuous and positive derivatives
Given a continuous random variable $s$ with density $f$ and cumulative distribution function $F$, consider the transformed variable
y = \mathrm{\Phi}\left(a+bs\right), | (2.2) |
where we assume that the range of the variable $a+bs$ is contained in the domain $D$.
Proposition 2.1. Given the transformation (2.2), the probability density $g$ and the cumulative distribution function $G$ of $y$ are given respectively by
g\left(y, a, b\right) = {U}_{1}/(b{U}_{2}), | (2.3) |
G\left(y, a, b\right) = F\left\{\left[{\mathrm{\Phi}}^{-1}\left(y\right)-a\right]/b\right\}. | (2.4) |
where
{U}_{1} = f\left\{\left[{\mathrm{\Phi}}^{-1}\left(y\right)-a\right]/b\right\}, \quad {U}_{2} = \phi\left[{\mathrm{\Phi}}^{-1}\left(y\right)\right]. | (2.5) |
Proof. A proof for the case when
G\left(y, a, b\right) = P\left[\mathrm{\Phi}\left(a+bs\right)\le y\right] = P\left\{s\le \left[{\mathrm{\Phi}}^{-1}\left(y\right)-a\right]/b\right\} = F\left\{\left[{\mathrm{\Phi}}^{-1}\left(y\right)-a\right]/b\right\}.
By the chain rule and the relationship $\mathrm{\Phi}\left[{\mathrm{\Phi}}^{-1}\left(y\right)\right] = y$, we have
\frac{\partial {\mathrm{\Phi}}^{-1}\left(y\right)}{\partial y} = \frac{1}{\phi\left[{\mathrm{\Phi}}^{-1}\left(y\right)\right]}. | (2.6) |
Taking the derivative of $G\left(y, a, b\right)$ in (2.4) with respect to $y$ and using (2.6), we obtain
\frac{\partial G\left(y, a, b\right)}{\partial y} = \frac{f\left\{\left[{\mathrm{\Phi}}^{-1}\left(y\right)-a\right]/b\right\}}{b\,\phi\left[{\mathrm{\Phi}}^{-1}\left(y\right)\right]} = \frac{{U}_{1}}{b{U}_{2}}.
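As a quick numerical sanity check of Proposition 2.1 (a sketch with arbitrary parameter values), the density (2.3) should integrate to 1 over $({c}_{0}, {c}_{1})$; here we take $\mathrm{\Phi}$ to be the standard normal CDF, $s$ standard normal, and $({c}_{0}, {c}_{1}) = (0, 1)$:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def g(y, a, b):
    # density (2.3) with Phi the standard normal CDF and f = phi (s standard normal)
    z = norm.ppf(y)
    return norm.pdf((z - a) / b) / (b * norm.pdf(z))

# with b < 1 the density vanishes at both endpoints, so the integral converges nicely
total, _ = quad(g, 0.0, 1.0, args=(0.5, 0.8))
print(total)  # should be approximately 1.0
```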
One can explore these interval distributions' shapes, including skewness and modality. For stress testing purposes, we are more interested in their tail risk behaviours.
Recall that, for a variable $X$ over $\left(-\infty, +\infty \right)$, tail fatness can be measured by a tailed index.
For a risk outcome over a finite interval
We say that an interval distribution has a fat right tail if the limit
Given
Recall that, for a Beta distribution with parameters
Next, because the derivative of
z = {\mathrm{\Phi}}^{-1}\left(y\right). | (2.7) |
Then
Lemma 2.2. Given
(ⅰ)
(ⅱ) If
(ⅲ) If
Proof. The first statement follows from the relationship
{\left[g\left(y, a, b\right){\left({y}_{1}-y\right)}^{\beta }\right]}^{-1/\beta } = \frac{{\left[g\left(y, a, b\right)\right]}^{-1/\beta }}{{y}_{1}-y} = \frac{{\left[g\left(\mathrm{\Phi }\left(\mathrm{z}\right), a, b\right)\right]}^{-1/\beta }}{{y}_{1}-\mathrm{\Phi }\left(\mathrm{z}\right)}. | (2.8) |
By L’Hospital’s rule, taking the derivatives of the numerator and the denominator of (2.8) with respect to $z$ yields the remaining statements.
For tail convexity, we say that the right tail of an interval distribution is convex if ${g}_{yy}^{''}\left(y, a, b\right)\ge 0$ for all $y$ sufficiently close to ${y}_{1}$.
Again, write
h\left(z, a, b\right) = \mathrm{log}\left[g\left(\mathrm{\Phi }\left(z\right), a, b\right)\right], | (2.9) |
where $z = {\mathrm{\Phi}}^{-1}\left(y\right)$. Then
g\left(y, a, b\right) = \mathrm{exp}\left[h\left(z, a, b\right)\right]. | (2.10) |
By (2.9), (2.10), using (2.6) and the relationship
{g}_{y}^{'} = \left[{h}_{z}^{'}\left(z\right)/\phi\left(z\right)\right]\mathrm{exp}\left[h\left({\mathrm{\Phi}}^{-1}\left(y\right), a, b\right)\right], \\ {g}_{yy}^{''} = \left[\frac{{h}_{zz}^{''}\left(z\right)}{{\phi}^{2}\left(z\right)}-\frac{{h}_{z}^{'}\left(z\right){\phi}^{'}\left(z\right)}{{\phi}^{3}\left(z\right)}+\frac{{\left[{h}_{z}^{'}\left(z\right)\right]}^{2}}{{\phi}^{2}\left(z\right)}\right]\mathrm{exp}\left[h\left({\mathrm{\Phi}}^{-1}\left(y\right), a, b\right)\right]. | (2.11) |
The following lemma, which follows from (2.11), is useful for checking tail convexity.
Lemma 2.3. Suppose
In this section, we focus on the case where $\mathrm{\Phi}$ is the standard normal CDF.
One can explore a wide list of densities arising from different choices for the distribution of $s$; we consider the following four cases:
A.
B.
C.
D.
Densities for cases A, B, C, and D are given respectively in (3.3) (section 3.1), (A.1), (A.3), and (A.5) (Appendix A). The tail behaviour study is summarized in Propositions 3.3 and 3.5 and in Remark 3.6. Sketches of density plots are provided in Appendix B for distributions A, B, and C.
Using the notations of section 2 and by (2.5), we have
\mathrm{log}\left(\frac{{U}_{1}}{{U}_{2}}\right) = \frac{-{z}^{2}+2az-{a}^{2}+{b}^{2}{z}^{2}}{2{b}^{2}} | (3.1) |
= \frac{-\left(1-{b}^{2}\right){\left(z-\frac{a}{1-{b}^{2}}\right)}^{2}+\frac{{b}^{2}}{1-{b}^{2}}{a}^{2}}{2{b}^{2}}. | (3.2) |
Therefore, we have
g\left(y, a, b\right) = \frac{1}{b}\mathrm{exp}\left\{\frac{-\left(1-{b}^{2}\right){\left(z-\frac{a}{1-{b}^{2}}\right)}^{2}+\frac{{b}^{2}}{1-{b}^{2}}{a}^{2}}{2{b}^{2}}\right\}. | (3.3) |
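The closed form (3.3) can be checked numerically against the generic density (2.3) with $f = \phi$; a sketch (parameter values arbitrary, with $b \ne 1$ so that $1-{b}^{2}\ne 0$):

```python
import numpy as np
from scipy.stats import norm

def g_closed_form(y, a, b):
    # density (3.3), written directly from the exponent in (3.2); requires b != 1
    z = norm.ppf(y)
    c = 1.0 - b**2
    return (1.0 / b) * np.exp((-c * (z - a / c)**2 + (b**2 / c) * a**2) / (2.0 * b**2))

y, a, b = 0.3, 0.5, 0.8
z = norm.ppf(y)
print(g_closed_form(y, a, b))                     # closed form (3.3)
print(norm.pdf((z - a) / b) / (b * norm.pdf(z)))  # generic form (2.3); the two should agree
```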
Again, using the notations of section 2, we have
g\left(y, p, \rho \right) = \sqrt{\frac{1-\rho }{\rho }}\,\mathrm{exp}\left\{-\frac{1}{2\rho }{\left[\sqrt{1-\rho }\,{\mathrm{\Phi}}^{-1}\left(y\right)-{\mathrm{\Phi}}^{-1}\left(p\right)\right]}^{2}+\frac{1}{2}{\left[{\mathrm{\Phi}}^{-1}\left(y\right)\right]}^{2}\right\}, | (3.4) |
where
Proposition 3.1. Density (3.3) is equivalent to (3.4) under the relationships:
a = \frac{{\Phi }^{-1}\left(p\right)}{\sqrt{1-\rho }} \ \ \text{and}\ \ b = \sqrt{\frac{\rho }{1-\rho }}. | (3.5) |
Proof. A similar proof can be found in [19]. By (3.4), we have
g\left(y, p, \rho \right) = \sqrt{\frac{1-\rho }{\rho }}\,\mathrm{exp}\left\{-\frac{1-\rho }{2\rho }{\left[{\mathrm{\Phi}}^{-1}\left(y\right)-{\mathrm{\Phi}}^{-1}\left(p\right)/\sqrt{1-\rho }\right]}^{2}+\frac{1}{2}{\left[{\mathrm{\Phi}}^{-1}\left(y\right)\right]}^{2}\right\}
= \frac{1}{b}\,\mathrm{exp}\left\{-\frac{1}{2}{\left[\frac{{\mathrm{\Phi}}^{-1}\left(y\right)-a}{b}\right]}^{2}\right\}\mathrm{exp}\left\{\frac{1}{2}{\left[{\mathrm{\Phi}}^{-1}\left(y\right)\right]}^{2}\right\}
= {U}_{1}/(b{U}_{2}) = g(y, a, b).
The following relationships are implied by (3.5):
\rho = \frac{{b}^{2}}{1+{b}^{2}}, | (3.6) |
a = {\mathrm{\Phi}}^{-1}\left(p\right)\sqrt{1+{b}^{2}}. | (3.7) |
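The reparameterizations (3.5)-(3.7) between $(a, b)$ and $(p, \rho)$ are straightforward to carry out programmatically; a sketch:

```python
import numpy as np
from scipy.stats import norm

def ab_to_prho(a, b):
    # (3.6)-(3.7): rho = b^2/(1 + b^2) and p = Phi(a / sqrt(1 + b^2))
    rho = b**2 / (1.0 + b**2)
    p = norm.cdf(a / np.sqrt(1.0 + b**2))
    return p, rho

def prho_to_ab(p, rho):
    # (3.5): a = Phi^{-1}(p)/sqrt(1 - rho) and b = sqrt(rho/(1 - rho))
    a = norm.ppf(p) / np.sqrt(1.0 - rho)
    b = np.sqrt(rho / (1.0 - rho))
    return a, b

# round trip: ab_to_prho(*prho_to_ab(p, rho)) recovers (p, rho)
```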
Remark 3.2. The mode of
\frac{\sqrt{1-\rho }}{1-2\rho }{\mathrm{\Phi }}^{-1}\left(p\right) = \frac{\sqrt{1+{b}^{2}}}{1-{b}^{2}}{\mathrm{\Phi }}^{-1}\left(p\right) = \frac{a}{1-{b}^{2}}. |
This means
Proposition 3.3. The following statements hold for density (3.3):
(ⅰ)
(ⅱ)
(ⅲ) If
Proof. For statement (ⅰ), we have
Consider statement (ⅱ). First by (3.3), if
{\left[g\left(\mathrm{\Phi}\left(z\right), a, b\right)\right]}^{-1/\beta } = {b}^{1/\beta }\mathrm{exp}\left(-\frac{\left({b}^{2}-1\right){z}^{2}+2az-{a}^{2}}{2\beta {b}^{2}}\right). | (3.8) |
By taking the derivative of (3.8) with respect to $z$, we have
-\left\{\partial {\left[g\left(\mathrm{\Phi}\left(z\right), a, b\right)\right]}^{-1/\beta }/\partial z\right\}/\phi\left(z\right) = \sqrt{2\pi }\,{b}^{1/\beta }\frac{\left({b}^{2}-1\right)z+a}{\beta {b}^{2}}\mathrm{exp}\left(-\frac{\left({b}^{2}-1\right){z}^{2}+2az-{a}^{2}}{2\beta {b}^{2}}+\frac{{z}^{2}}{2}\right). | (3.9) |
Thus
\left\{\partial {\left[g\left(\mathrm{\Phi}\left(z\right), a, b\right)\right]}^{-1/\beta }/\partial z\right\}/\phi\left(z\right) = -\sqrt{2\pi }\,{b}^{1/\beta }\frac{\left({b}^{2}-1\right)z+a}{\beta {b}^{2}}\mathrm{exp}\left(-\frac{\left({b}^{2}-1\right){z}^{2}+2az-{a}^{2}}{2\beta {b}^{2}}+\frac{{z}^{2}}{2}\right). | (3.10) |
Thus
For statement (ⅲ), we use Lemma 2.3. By (2.9) and using (3.2), we have
h\left(z, a, b\right) = \mathrm{log}\left(\frac{{U}_{1}}{b{U}_{2}}\right) = \frac{-\left(1-{b}^{2}\right){\left(z-\frac{a}{1-{b}^{2}}\right)}^{2}+\frac{{b}^{2}}{1-{b}^{2}}{a}^{2}}{2{b}^{2}}-\mathrm{log}\left(b\right).
When
Remark 3.4. Assume
\lim_{z\to +\infty } -\left\{\partial {\left[g\left(\mathrm{\Phi}\left(z\right), a, b\right)\right]}^{-1/\beta }/\partial z\right\}/\phi\left(z\right)
is
For these distributions, we again focus on their tail behaviours. A proof for the next proposition can be found in Appendix A.
Proposition 3.5. The following statements hold:
(a) Density
(b) The tailed index of
Remark 3.6. Among distributions A, B, and C and the Beta distribution, distribution B has the highest tailed index, namely 1, independent of the choices of its parameters.
In this section, we assume that
First, we consider a simple case, where the risk outcome $y$ is given by
y = \mathrm{\Phi }\left(v+bs\right), | (4.1) |
where $v = {a}_{0}+{a}_{1}{x}_{1}+\dots +{a}_{k}{x}_{k}$.
Given a sample $\left\{\left({y}_{i}, {x}_{1i}, \dots, {x}_{ki}\right)\right\}_{i = 1}^{n}$, the log-likelihood function of model (4.1) is
LL = \sum _{i = 1}^{n}\left\{\mathrm{log}\,f\left(\frac{{z}_{i}-{v}_{i}}{b}\right)-\mathrm{log}\,\phi\left({z}_{i}\right)-\mathrm{log}\,b\right\}, | (4.2) |
where ${z}_{i} = {\mathrm{\Phi}}^{-1}\left({y}_{i}\right)$ and ${v}_{i} = {a}_{0}+{a}_{1}{x}_{1i}+\dots +{a}_{k}{x}_{ki}$.
Recall that the least squares estimator of $\left({a}_{0}, {a}_{1}, \dots, {a}_{k}\right)$, which minimizes the sum of squares
SS = \sum _{i = 1}^{n}{({z}_{i}-{v}_{i})}^{2} | (4.3) |
has a closed-form solution given by $\hat{a} = {\left({{\rm{X}}}^{{\rm{T}}}{\rm{X}}\right)}^{-1}{{\rm{X}}}^{{\rm{T}}}{\rm{Z}}$, where ${{\rm{X}}}^{{\rm{T}}}$ denotes the transpose of
{\rm{X}} = \begin{bmatrix} 1 & {x}_{11} & \ldots & {x}_{k1}\\ 1 & {x}_{12} & \ldots & {x}_{k2}\\ \vdots & \vdots & & \vdots \\ 1 & {x}_{1n} & \ldots & {x}_{kn} \end{bmatrix}, \quad {\rm{Z}} = \begin{bmatrix} {z}_{1}\\ {z}_{2}\\ \vdots \\ {z}_{n} \end{bmatrix}.
The next proposition shows there exists an analytical solution for the parameters of model (4.1).
Proposition 4.1. Given a sample
Proof. Dropping the constant term from (4.2) and noting that $f$ is the standard normal density, we have
LL = -\frac{1}{2{b}^{2}}\sum _{i = 1}^{n}{({z}_{i}-{v}_{i})}^{2}-n\,\mathrm{log}\,b. | (4.4) |
Hence the maximum likelihood estimates of $\left({a}_{0}, {a}_{1}, \dots, {a}_{k}\right)$ coincide with the least squares estimators, and maximizing (4.4) over $b$ gives ${\hat{b}}^{2} = SS/n$.
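Proposition 4.1 amounts to the following short recipe (a sketch; X is the design matrix defined above, with a leading column of ones):

```python
import numpy as np
from scipy.stats import norm

def fit_simple_model(y, X):
    # analytical MLE for model (4.1) with Phi standard normal (Proposition 4.1)
    z = norm.ppf(y)                                # z_i = Phi^{-1}(y_i)
    a_hat, *_ = np.linalg.lstsq(X, z, rcond=None)  # least squares estimate for (4.3)
    resid = z - X @ a_hat
    b_hat = np.sqrt(np.mean(resid**2))             # maximizer of (4.4): b^2 = SS/n
    return a_hat, b_hat
```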
Next, we consider the general case of model (1.1), where the risk outcome $y$ is given by
y = \mathrm{\Phi }[v+ws], | (4.5) |
where parameter
(a)
(b)
Given a sample, the log-likelihood functions of model (4.5) under cases (a) and (b) are given respectively by
LL = \sum _{i = 1}^{n}-\frac{1}{2}\left[{\left({z}_{i}-{v}_{i}\right)}^{2}/{w}_{i}^{2}-{u}_{i}\right], | (4.6) |
LL = \sum _{i = 1}^{n}\left\{-\left({z}_{i}-{v}_{i}\right)/{w}_{i}-2\,\mathrm{log}\left[1+\mathrm{exp}\left(-({z}_{i}-{v}_{i})/{w}_{i}\right)\right]-{u}_{i}\right\}. | (4.7) |
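A sketch of the two log-likelihoods in code, implemented directly from the density (2.3) with observation-specific $w_i$ (the bookkeeping terms $u_i$ in (4.6) and (4.7) are absorbed here into the explicit $-\mathrm{log}\,\phi(z_i)$ and $-\mathrm{log}\,w_i$ terms; this packaging is an assumption for illustration):

```python
import numpy as np
from scipy.stats import norm

def ll_normal(z, v, w):
    # case (a), s ~ N(0,1): sum of log f((z_i - v_i)/w_i) - log w_i - log phi(z_i)
    x = (z - v) / w
    return np.sum(norm.logpdf(x) - np.log(w) - norm.logpdf(z))

def ll_logistic(z, v, w):
    # case (b), s standard logistic: log f(x) = -x - 2*log(1 + exp(-x))
    x = (z - v) / w
    return np.sum(-x - 2.0 * np.logaddexp(0.0, -x) - np.log(w) - norm.logpdf(z))
```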
Recall that a function is log-concave if its logarithm is concave. If a function is concave, a local maximum is a global maximum, and the function is unimodal. This property is useful when searching for maximum likelihood estimates.
Proposition 4.2. The functions (4.6) and (4.7) are concave as a function of
Proof. It is well-known that, if
For (4.7), the linear part
In general, parameters
Algorithm 4.3. Follow the steps below to estimate parameters of model (4.5):
(a) Given
(b) Given
(c) Iterate (a) and (b) until convergence is reached (a sketch follows below).
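A minimal sketch of Algorithm 4.3 under an illustrative parametrization ($v = {\rm{X}}a$ and $w = \mathrm{exp}({\rm{W}}c) > 0$; the matrices, the exponential link for $w$, and the use of ll_normal from the sketch above are assumptions, not the paper's prescription):

```python
import numpy as np
from scipy.optimize import minimize

def fit_alternating(z, X, W, n_iter=20):
    a = np.zeros(X.shape[1])   # parameters for v = X @ a
    c = np.zeros(W.shape[1])   # parameters for w = exp(W @ c)
    nll = lambda a_, c_: -ll_normal(z, X @ a_, np.exp(W @ c_))
    for _ in range(n_iter):
        a = minimize(lambda p: nll(p, c), a).x   # step (a): update a with c fixed
        c = minimize(lambda p: nll(a, p), c).x   # step (b): update c with a fixed
    return a, c                                  # step (c): stop after convergence
```

The concavity result of Proposition 4.2 is what keeps each inner search in this scheme well behaved.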
With the interval distributions introduced in this paper, models with a random effect can be fitted for a continuous risk outcome by maximum likelihood approaches assuming an interval distribution. These models provide an alternative regression tool to the Beta regression model and the fractional response model, as well as a tool for tail risk assessment.
The authors are very grateful to the third reviewer for many constructive comments. The first author is grateful to Biao Wu for many valuable conversations. Thanks also go to Clovis Sukam for his critical reading of the manuscript.
The views expressed in this article are not necessarily those of Royal Bank of Canada and Scotiabank or any of their affiliates. Please direct any comments to Bill Huajian Yang at h_y02@yahoo.ca.
[1] L. Lv, Y. He, J. Liu, X. Liu, S. Vinson, Invasion, spread, biology and harm of Solenopsis invicta Buren (in Chinese), Cant. Agric. Sci., 5 (2006), 3-11. https://doi.org/10.16768/j.issn.1004-874x.2006.05.001
[2] W. B. Wu, L. Zhi, T. S. Hong, et al., Detection of Solenopsis invicta nest using spectrum analysis technology, Trans. Chin. Soc. Agric. Eng., 29 (2013), 175-182.
[3] R. Girshick, J. Donahue, T. Darrell, J. Malik, Rich feature hierarchies for accurate object detection and semantic segmentation, in 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2014), 580-587. https://doi.org/10.1109/CVPR.2014.81
[4] R. Girshick, Fast R-CNN, in 2015 IEEE International Conference on Computer Vision (ICCV), (2015), 1440-1448. https://doi.org/10.1109/ICCV.2015.169
[5] S. Ren, K. He, R. Girshick, J. Sun, Faster R-CNN: towards real-time object detection with region proposal networks, IEEE Trans. Pattern Anal. Mach. Intell., 39 (2016), 1137-1149. https://doi.org/10.1109/TPAMI.2016.2577031
[6] K. He, G. Gkioxari, P. Dollar, R. Girshick, Mask R-CNN, IEEE Trans. Pattern Anal. Mach. Intell., 42 (2017), 386-397. https://doi.org/10.48550/arXiv.1703.06870
[7] J. Redmon, S. Divvala, R. Girshick, A. Farhadi, You only look once: unified, real-time object detection, in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2016), 779-788. https://doi.org/10.1109/CVPR.2016.91
[8] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, SSD: single shot multibox detector, in European Conference on Computer Vision, (2016), 21-37. https://doi.org/10.1007/978-3-319-46448-0_2
[9] A. Roy, J. Bhaduri, Real-time growth stage detection model for high degree of occultation using DenseNet-fused YOLOv4, Comput. Electron. Agric., 193 (2022), 106694. https://doi.org/10.1016/j.compag.2022.106694
[10] A. Roy, R. Bose, J. Bhaduri, A fast accurate fine-grain object detection model based on YOLOv4 deep neural network, Neural Comput. Appl., 34 (2022), 3895-3921. https://doi.org/10.1007/s00521-021-06651-x
[11] M. O. Lawal, Tomato detection based on modified YOLOv3 framework, Sci. Rep., 11 (2021), 1447. https://doi.org/10.1038/s41598-021-81216-5
[12] D. Wu, S. Lv, M. Jiang, H. Song, Using channel pruning-based YOLO v4 deep learning algorithm for the real-time and accurate detection of apple flowers in natural environments, Comput. Electron. Agric., 178 (2020), 105742. https://doi.org/10.1016/j.compag.2020.105742
[13] O. Agbo-Ajala, S. Viriri, A lightweight convolutional neural network for real and apparent age estimation in unconstrained face images, IEEE Access, 8 (2020), 162800-162808. https://doi.org/10.1109/ACCESS.2020.3022039
[14] A. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, et al., MobileNets: efficient convolutional neural networks for mobile vision applications, preprint, arXiv:1704.04861.
[15] X. Zhang, X. Zhou, M. Lin, J. Sun, ShuffleNet: an extremely efficient convolutional neural network for mobile devices, in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, (2018), 6848-6856. https://doi.org/10.1109/CVPR.2018.00716
[16] M. Tan, Q. Le, EfficientNet: rethinking model scaling for convolutional neural networks, in Proceedings of the 36th International Conference on Machine Learning, (2019), 6105-6114.
[17] F. Zhang, Z. Chen, R. Bao, C. Zhang, Z. Wang, Recognition of dense cherry tomatoes based on improved YOLOv4-LITE lightweight neural network (in Chinese), Trans. Chin. Soc. Agric. Eng., 37 (2021), 270-278. https://doi.org/10.11975/j.issn.1002-6819.2021.16.033
[18] M. Togacar, B. Ergen, Classification of cloud images by using super resolution, semantic segmentation approaches and binary sailfish optimization method with deep learning model, Comput. Electron. Agric., 193 (2022), 106724. https://doi.org/10.1016/j.compag.2022.106724
[19] X. Wang, X. Zhuang, W. Zhang, Y. Chen, Y. Li, Lightweight real-time object detection model for UAV platform, in 2021 International Conference on Computer Communication and Artificial Intelligence (CCAI), (2021), 20-24. https://doi.org/10.1109/CCAI50917.2021.9447518
[20] L. Yu, E. Yang, C. Luo, P. Ren, AMCD: an accurate deep learning-based metallic corrosion detector for MAV-based real-time visual inspection, J. Ambient Intell. Hum. Comput., 2021 (2021), 1-12. https://doi.org/10.1007/s12652-021-03580-4
[21] Y. Cheng, T. Zheng, Binocular visual obstacle avoidance of UAV based on deep learning (in Chinese), Electron. Opt. Control, 10 (2021), 31-35. https://doi.org/10.3969/j.issn.1671-637X.2021.10.007
[22] V. Gonzalez-Huitron, J. León-Borges, A. E. Rodriguez-Mata, L. Amabilis-Sosa, B. Ramírez-Pereda, H. Rodriguez, Disease detection in tomato leaves via CNN with lightweight architectures implemented in Raspberry Pi 4, Comput. Electron. Agric., 181 (2021), 105951. https://doi.org/10.1016/j.compag.2020.105951
[23] S. Yun, D. Han, S. J. Oh, S. Chun, J. Choe, Y. Yoo, CutMix: regularization strategy to train strong classifiers with localizable features, in Proceedings of the IEEE International Conference on Computer Vision (ICCV), (2019), 6023-6032. https://doi.org/10.1109/ICCV.2019.00612
[24] Y. Ma, X. Cai, F. Sun, Towards no-reference image quality assessment based on multi-scale convolutional neural network, Comput. Model. Eng. Sci., 123 (2020), 201-216. https://doi.org/10.32604/cmes.2020.07867
[25] A. Bochkovskiy, C. Y. Wang, H. Y. M. Liao, YOLOv4: optimal speed and accuracy of object detection, preprint, arXiv:2004.10934.
[26] Y. Wu, K. He, Group normalization, in European Conference on Computer Vision, 128 (2020), 742-755. https://doi.org/10.1007/978-3-030-01261-8_1
[27] K. Han, Y. Wang, Q. Tian, J. Guo, C. Xu, C. Xu, GhostNet: more features from cheap operations, in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2020), 1577-1586. https://doi.org/10.1109/CVPR42600.2020.00165
[28] M. Wang, S. Jiang, J. Wu, C. Wang, Research on image defogging algorithm based on improved generative adversarial network, J. Changchun Univ. Sci. Technol., 44 (2021), 93-99.
[29] Q. Wang, B. Wu, P. Zhu, P. Li, W. Zuo, Q. Hu, ECA-Net: efficient channel attention for deep convolutional neural networks, in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2020). https://doi.org/10.1109/CVPR42600.2020.01155
[30] B. Li, Y. Q. Yao, J. Tan, G. Zhang, F. Yu, J. Lu, Equalized focal loss for dense long-tailed object detection, preprint, arXiv:2201.02593.