Research article

Lightweight tea bud recognition network integrating GhostNet and YOLOv5


  • Received: 12 July 2022 Revised: 19 August 2022 Accepted: 29 August 2022 Published: 05 September 2022
  • To address the problems of low detection accuracy and slow speed caused by the complex background of tea sprouts and the small size of the targets, this paper proposes a tea bud detection algorithm integrating GhostNet and YOLOv5. The GhostNet module is introduced to reduce the number of parameters and speed up detection. A coordinate attention mechanism is then added to the backbone to enhance the model's feature extraction ability. A bi-directional feature pyramid network (BiFPN) is used in the neck layer to strengthen the fusion between shallow and deep features and thereby improve the detection accuracy of small objects. Finally, efficient intersection over union (EIoU) is adopted as the localization loss to further improve detection accuracy. The experimental results show that the precision of GhostNet-YOLOv5 is 76.31%, which is 1.31, 4.83, and 3.59 percentage points higher than that of Faster RCNN, YOLOv5, and YOLOv5-Lite, respectively. Comparing the actual detection performance of GhostNet-YOLOv5 and YOLOv5 on buds under different quantities, shooting angles, and illumination angles, with the F1 score as the evaluation metric, GhostNet-YOLOv5 scores 7.84, 2.88, and 3.81 percentage points higher than YOLOv5 in these three settings. Minimal code sketches of the main components are given below.
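The Ghost module at the heart of GhostNet replaces part of an ordinary convolution with "cheap operations": a small 1×1 convolution produces the intrinsic feature maps, and inexpensive depthwise convolutions generate the remaining "ghost" maps from them. Below is a minimal PyTorch sketch of this idea, following Han et al. (CVPR 2020); the class name and defaults (ratio, dw_kernel) are illustrative, not this paper's exact configuration.

```python
import math

import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Ghost module: a cheap substitute for a full convolution."""
    def __init__(self, in_ch, out_ch, ratio=2, dw_kernel=3):
        super().__init__()
        self.out_ch = out_ch
        init_ch = math.ceil(out_ch / ratio)   # intrinsic feature maps
        ghost_ch = init_ch * (ratio - 1)      # maps made by cheap ops

        # primary 1x1 convolution: few output channels, so few FLOPs
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, 1, bias=False),
            nn.BatchNorm2d(init_ch),
            nn.ReLU(inplace=True),
        )
        # depthwise convolution: generates the "ghost" maps cheaply
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, ghost_ch, dw_kernel,
                      padding=dw_kernel // 2, groups=init_ch, bias=False),
            nn.BatchNorm2d(ghost_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)
        out = torch.cat([y, self.cheap(y)], dim=1)
        return out[:, :self.out_ch]           # trim to the requested width
```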
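Coordinate attention, unlike plain channel attention, factorizes global pooling into two 1-D poolings along the height and width axes, so the resulting weights retain positional information in each direction. A minimal sketch assuming the standard formulation of Hou et al. (CVPR 2021); the reduction ratio and mid-channel floor are illustrative defaults.

```python
import torch
import torch.nn as nn

class CoordAttention(nn.Module):
    """Coordinate attention: direction-aware channel attention."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, 1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        _, _, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                  # (B,C,H,1): pool along W
        x_w = x.mean(dim=2, keepdim=True).transpose(2, 3)  # (B,C,W,1): pool along H
        # shared transform over the concatenated 1-D descriptors
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = y.split([h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                  # (B,C,H,1)
        a_w = torch.sigmoid(self.conv_w(y_w.transpose(2, 3)))  # (B,C,1,W)
        return x * a_h * a_w              # reweight along both directions
```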
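Beyond its bidirectional top-down and bottom-up pathways, the distinguishing piece of BiFPN is fast normalized fusion: each fusion node combines its inputs with learnable non-negative weights normalized to sum to roughly one. A minimal sketch of that single fusion step, assuming the inputs have already been resized to a common shape and channel count:

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """Fast normalized fusion from BiFPN (Tan et al., CVPR 2020)."""
    def __init__(self, n_inputs, eps=1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_inputs))  # one weight per input
        self.eps = eps

    def forward(self, inputs):
        w = torch.relu(self.w)          # keep weights non-negative
        w = w / (w.sum() + self.eps)    # normalize without a softmax
        return sum(wi * x for wi, x in zip(w, inputs))
```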
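EIoU augments the IoU loss with a center-distance penalty and separate width and height penalties, each normalized by the smallest box enclosing the prediction and the target: L_EIoU = 1 − IoU + ρ²(b, b_gt)/c² + (w − w_gt)²/c_w² + (h − h_gt)²/c_h², where c_w and c_h are the enclosing box's width and height and c its diagonal. A minimal sketch for boxes in (x1, y1, x2, y2) format; the function name and eps value are illustrative.

```python
import torch

def eiou_loss(pred, target, eps=1e-7):
    """EIoU loss for axis-aligned boxes in (x1, y1, x2, y2) format."""
    w1, h1 = pred[..., 2] - pred[..., 0], pred[..., 3] - pred[..., 1]
    w2, h2 = target[..., 2] - target[..., 0], target[..., 3] - target[..., 1]

    # intersection and union -> IoU term
    iw = (torch.min(pred[..., 2], target[..., 2]) -
          torch.max(pred[..., 0], target[..., 0])).clamp(min=0)
    ih = (torch.min(pred[..., 3], target[..., 3]) -
          torch.max(pred[..., 1], target[..., 1])).clamp(min=0)
    inter = iw * ih
    iou = inter / (w1 * h1 + w2 * h2 - inter + eps)

    # smallest enclosing box
    cw = torch.max(pred[..., 2], target[..., 2]) - torch.min(pred[..., 0], target[..., 0])
    ch = torch.max(pred[..., 3], target[..., 3]) - torch.min(pred[..., 1], target[..., 1])

    # squared center distance over squared enclosing-box diagonal
    rho2 = ((pred[..., 0] + pred[..., 2] - target[..., 0] - target[..., 2]) ** 2 +
            (pred[..., 1] + pred[..., 3] - target[..., 1] - target[..., 3]) ** 2) / 4
    c2 = cw ** 2 + ch ** 2 + eps

    # IoU term + center-distance, width, and height penalties
    return (1 - iou + rho2 / c2
            + (w1 - w2) ** 2 / (cw ** 2 + eps)
            + (h1 - h2) ** 2 / (ch ** 2 + eps))
```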

    Citation: Miaolong Cao, Hao Fu, Jiayi Zhu, Chenggang Cai. Lightweight tea bud recognition network integrating GhostNet and YOLOv5[J]. Mathematical Biosciences and Engineering, 2022, 19(12): 12897-12914. doi: 10.3934/mbe.2022602

  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)