Research article

An efficient detection model based on improved YOLOv5s for abnormal surface features of fish


  • Received: 12 November 2023 Revised: 11 December 2023 Accepted: 13 December 2023 Published: 02 January 2024
  • Detecting abnormal surface features is an important method for identifying abnormal fish. However, existing methods suffer from excessive subjectivity, limited accuracy, and poor real-time performance. To address these challenges, we propose a real-time, accurate detection model for abnormal surface features of in-water fish based on an improved YOLOv5s. The specific enhancements are: 1) We optimize the complete intersection over union (CIoU) loss and non-maximum suppression (NMS) with the normalized Gaussian Wasserstein distance metric to improve the model's ability to detect tiny targets. 2) We design the DenseOne module to enhance the reusability of abnormal surface features and introduce MobileViTv2 to improve detection speed; both are integrated into the feature extraction network. 3) Following the ACmix principle, we fuse omni-dimensional dynamic convolution and the convolutional block attention module to address the difficulty of extracting deep features against complex backgrounds. In comparative experiments on a validation set of 160 images of in-water abnormal fish, the model achieves a precision, recall, mAP50 and mAP50:95 of 99.5%, 99.1%, 99.1% and 73.9%, respectively, at 88 frames per second (FPS), surpassing the baseline by 1.4, 1.2, 3.2 and 8.2 percentage points and 1 FPS. The improved model also outperforms other state-of-the-art models on comprehensive evaluation indexes.
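Enhancement 1) replaces IoU-based matching with the normalized Gaussian Wasserstein distance of Wang et al. A minimal sketch of that metric follows; the box format `(cx, cy, w, h)` and the normalizing constant `c` are assumptions for illustration (the paper may tune `c` to its dataset):

```python
import math

def nwd(box1, box2, c=12.8):
    """Normalized Gaussian Wasserstein distance between two boxes
    given as (cx, cy, w, h). Each box is modeled as a 2-D Gaussian
    N([cx, cy], diag(w^2/4, h^2/4)); the squared 2-Wasserstein
    distance between such Gaussians has a closed form."""
    cx1, cy1, w1, h1 = box1
    cx2, cy2, w2, h2 = box2
    # Closed-form squared 2-Wasserstein distance between the Gaussians.
    w2_sq = ((cx1 - cx2) ** 2 + (cy1 - cy2) ** 2
             + ((w1 - w2) / 2) ** 2 + ((h1 - h2) / 2) ** 2)
    # Exponential mapping to (0, 1], comparable to an IoU-style score;
    # c is a dataset-dependent constant.
    return math.exp(-math.sqrt(w2_sq) / c)
```

Unlike IoU, this score degrades smoothly for non-overlapping boxes, which is why it helps with tiny targets whose predicted and ground-truth boxes often fail to overlap at all.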

    Citation: Zheng Zhang, Xiang Lu, Shouqi Cao. An efficient detection model based on improved YOLOv5s for abnormal surface features of fish[J]. Mathematical Biosciences and Engineering, 2024, 21(2): 1765-1790. doi: 10.3934/mbe.2024076




  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
