Aerial image target detection has important application value in navigation security, traffic control and environmental monitoring. Compared with natural scene images, aerial images have more complex backgrounds and contain far more small targets, which places higher demands on both the detection accuracy and the real-time performance of an algorithm. To further improve the detection accuracy of lightweight networks for small targets in aerial images, we propose a cross-scale multi-feature fusion target detection method (CMF-YOLOv5s) for aerial images. Based on the original YOLOv5s, a bidirectional cross-scale feature fusion sub-network (BsNet) is constructed, using a newly designed multi-scale fusion module (MFF) and a cross-scale feature fusion strategy to enhance the algorithm's ability to fuse multi-scale feature information and to reduce the loss of small-target feature information. To address the high missed-detection rate of small targets in aerial images, we construct a multi-scale detection head with four outputs to improve the network's ability to perceive small targets. To enhance the network's recognition rate of small-target samples, we improve the K-means algorithm by introducing a genetic algorithm that optimizes the prediction box sizes, generating anchor boxes better suited to aerial images. Experimental results on the aerial small-target dataset VisDrone-2019 show that the proposed method detects more small targets in aerial images with complex backgrounds. At a detection speed of 116 FPS, the detection accuracy metrics mAP0.5 and mAP0.5:0.95 for small targets improve by 5.5% and 3.6%, respectively, over the original algorithm. Meanwhile, compared with eight advanced lightweight networks such as YOLOv7-Tiny and PP-PicoDet-s, mAP0.5 improves by more than 3.3% and mAP0.5:0.95 by more than 1.9%.
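The anchor-generation step summarized above, K-means clustering refined by a genetic algorithm, closely resembles YOLOv5's autoanchor procedure. The sketch below illustrates one way such a step could look; the function names (`iou_wh`, `kmeans_anchors`, `evolve_anchors`), the choice of 12 anchors (assuming three anchors per output of a four-output detection head) and the mutation parameters are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the paper's code): cluster ground-truth box sizes with
# k-means, then mutate the resulting anchors and keep improvements, scoring
# each candidate set by mean best-anchor IoU over all boxes.
import numpy as np


def iou_wh(wh, anchors):
    """IoU between box sizes and anchor sizes, ignoring position."""
    inter = np.minimum(wh[:, None, :], anchors[None, :, :]).prod(axis=2)
    union = wh.prod(axis=1)[:, None] + anchors.prod(axis=1)[None, :] - inter
    return inter / union


def fitness(wh, anchors):
    """Mean IoU between each box and its best-matching anchor."""
    return iou_wh(wh, anchors).max(axis=1).mean()


def kmeans_anchors(wh, k=12, iters=100, seed=0):
    """Plain k-means on (width, height) pairs, with IoU-based assignment."""
    rng = np.random.default_rng(seed)
    centers = wh[rng.choice(len(wh), k, replace=False)].astype(float)
    for _ in range(iters):
        labels = iou_wh(wh, centers).argmax(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = wh[labels == j].mean(axis=0)
    return centers


def evolve_anchors(wh, anchors, generations=1000, mut_prob=0.9, sigma=0.1, seed=0):
    """Genetic refinement: randomly mutate anchors, keep fitter candidates."""
    rng = np.random.default_rng(seed)
    best, best_fit = anchors.copy(), fitness(wh, anchors)
    for _ in range(generations):
        mask = rng.random(best.shape) < mut_prob
        mutated = best * np.where(mask, rng.normal(1.0, sigma, best.shape), 1.0)
        mutated = mutated.clip(min=2.0)
        f = fitness(wh, mutated)
        if f > best_fit:
            best, best_fit = mutated, f
    return best[np.argsort(best.prod(axis=1))]  # sort small -> large


if __name__ == "__main__":
    # Stand-in for VisDrone ground-truth (width, height) pairs in pixels.
    wh = np.abs(np.random.default_rng(1).normal(40, 25, size=(5000, 2))) + 2
    anchors = evolve_anchors(wh, kmeans_anchors(wh, k=12))
    print(anchors.round(1))  # 12 anchors, e.g. 3 per output of a 4-output head
```

In practice the (width, height) pairs would come from the VisDrone-2019 annotations rather than the synthetic stand-in used here, and the fitness criterion could be any recall-oriented measure rather than mean best IoU.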
Citation: Yang Pan, Jinhua Yang, Lei Zhu, Lina Yao, Bo Zhang. Aerial images object detection method based on cross-scale multi-feature fusion[J]. Mathematical Biosciences and Engineering, 2023, 20(9): 16148-16168. doi: 10.3934/mbe.2023721