Research article

Super-resolution reconstruction algorithm for dim and blurred traffic sign images in complex environments

  • Received: 30 January 2024 Revised: 25 March 2024 Accepted: 08 April 2024 Published: 22 April 2024
  • MSC : 68T45

  • In poor lighting and in rainy or foggy weather, road traffic signs appear dim and blurred and are difficult to recognize. A super-resolution reconstruction algorithm for traffic sign images captured under complex lighting and bad weather was proposed. First, a novel attention residual module was designed that incorporates an aggregated feature attention mechanism on the skip-connection branch of the basic residual module, so that the deep network can capture richer detail information. Second, a cross-layer skip-connection feature fusion mechanism was adopted to strengthen the flow of information across layers, prevent vanishing gradients in the deep network, and enhance the reconstruction of edge details. Finally, a forward-inverse dual-channel sub-pixel convolutional up-sampling method was designed to reconstruct the super-resolution image with a better expression of pixel and spatial information. The model was trained and evaluated on a Chinese traffic sign dataset collected in natural scenes. At a scaling factor of 4, compared with MICU (Multi-level Information Compensation and U-net), a recently published deep learning-based single-image super-resolution reconstruction algorithm, the average PSNR and SSIM are improved by 0.031 dB and 0.083, respectively, and the averages on the actual test set reach 20.946 dB and 0.656. The experimental results show that the images reconstructed by the proposed algorithm surpass those of the mainstream comparison algorithms in both objective metrics and subjective perception, with higher peak signal-to-noise ratio and structural similarity. The method can provide technical support for research on driving-assistance devices operating in natural scenes under time-varying illumination and bad weather.
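    To make the skip-connection attention and the sub-pixel up-sampling steps concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the module names, the 64-channel width, and the squeeze-and-excitation-style channel attention used as a stand-in for the paper's aggregated feature attention are all illustrative assumptions, and the forward-inverse dual-channel arrangement and cross-layer fusion are omitted.

```python
# Minimal sketch (illustrative only): a residual block with channel attention
# on the skip-connection branch, followed by a x4 sub-pixel convolutional
# upsampler built from PixelShuffle stages. Names and widths are assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style attention; stand-in for the paper's
    aggregated feature attention mechanism."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Re-weight each channel by its global statistics.
        return x * self.fc(self.pool(x))

class AttentionResidualBlock(nn.Module):
    """Residual block with attention placed on the skip-connection side."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.skip_attention = ChannelAttention(channels)

    def forward(self, x):
        # Identity path is re-weighted before being added back.
        return self.body(x) + self.skip_attention(x)

class SubPixelUpsampler(nn.Module):
    """x4 sub-pixel convolutional up-sampling via two PixelShuffle stages."""
    def __init__(self, channels: int = 64, out_channels: int = 3):
        super().__init__()
        self.up = nn.Sequential(
            nn.Conv2d(channels, channels * 4, 3, padding=1),
            nn.PixelShuffle(2),
            nn.Conv2d(channels, channels * 4, 3, padding=1),
            nn.PixelShuffle(2),
            nn.Conv2d(channels, out_channels, 3, padding=1),
        )

    def forward(self, x):
        return self.up(x)

if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)        # low-resolution feature map
    feat = AttentionResidualBlock(64)(feat)  # attention-guided residual features
    sr = SubPixelUpsampler(64)(feat)         # reconstructed high-resolution image
    print(sr.shape)                          # torch.Size([1, 3, 128, 128])
```

    Placing the attention on the skip branch, rather than inside the residual body, lets the block re-weight the identity features before they are added back, which is the design choice the abstract highlights.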

    Citation: Yan Ma, Defeng Kong. Super-resolution reconstruction algorithm for dim and blurred traffic sign images in complex environments[J]. AIMS Mathematics, 2024, 9(6): 14525-14548. doi: 10.3934/math.2024706

    References


    [1] K. Zhou, Y. Zhan, D. Fu, Learning region-based attention network for traffic sign recognition, Sensors, 21 (2021), 686. https://doi.org/10.3390/s21030686 doi: 10.3390/s21030686
    [2] Z. Liu, Y. Cai, H. Wang, L. Chen, H. Gao, Y. Jia, et al., Robust target recognition and tracking of self-driving cars with radar and camera information fusion under severe weather conditions, IEEE T. Intell. Transp. Syst., 23 (2021), 6640–6653. https://doi.org/10.1109/TITS.2021.3059674 doi: 10.1109/TITS.2021.3059674
    [3] M. Hnewa, H. Radha, Object detection under rainy conditions for autonomous vehicles: A review of state-of-the-art and emerging techniques, IEEE Signal Proc. Mag., 38 (2020), 53–67. https://doi.org/10.1109/MSP.2020.2984801 doi: 10.1109/MSP.2020.2984801
    [4] O. Soufi, Z. Aarab, F. Belouadha, Benchmark of deep learning models for single image super-resolution (SISR), In: 2022 2nd International Conference on Innovative Research in Applied Science, Engineering and Technology (IRASET), 2022. https://doi.org/10.1109/IRASET52964.2022.9738274
    [5] K. Li, S. Yang, R. Dong, X. Wang, J. Huang, Survey of single image super-resolution reconstruction, IET Image Process., 14 (2022), 2273–2290. https://doi.org/10.1049/iet-ipr.2019.1438 doi: 10.1049/iet-ipr.2019.1438
    [6] D. Qiu, Y. Cheng, X. Wang, Medical image super-resolution reconstruction algorithms based on deep learning: A survey, Comput. Meth. Prog. Bio., 238 (2023), 107590. https://doi.org/10.1016/j.cmpb.2023.107590 doi: 10.1016/j.cmpb.2023.107590
    [7] L. Zhang, R. Dong, S. Yuan, W. Li, J. Zheng, H. Fu, Making low-resolution satellite images reborn: A deep learning approach for super-resolution building extraction, Remote Sens., 13 (2021), 2872. https://doi.org/10.3390/rs13152872 doi: 10.3390/rs13152872
    [8] H. Chen, X. He, L. Qing, Y. Wu, C. Ren, R. E. Sheriff, et al., Real-world single image super-resolution: A brief review, Inform. Fusion, 79 (2022), 124–145. https://doi.org/10.1016/j.inffus.2021.09.005 doi: 10.1016/j.inffus.2021.09.005
    [9] S. C. Park, M. K. Park, M. G. Kang, Super-resolution image reconstruction: A technical overview, IEEE Signal Proc. Mag., 20 (2003), 21–36. https://doi.org/10.1109/MSP.2003.1203207 doi: 10.1109/MSP.2003.1203207
    [10] D. O. Baguer, J. Leuschner, M. Schmidt, Computed tomography reconstruction using deep image prior and learned reconstruction methods, Inverse Probl., 36 (2020), 094004. https://doi.org/10.1088/1361-6420/aba415 doi: 10.1088/1361-6420/aba415
    [11] J. Xiao, H. Yong, L. Zhang, Degradation model learning for real-world single image super-resolution, In: Computer Vision–ACCV 2020, 2020. https://doi.org/10.1007/978-3-030-69532-3_6
    [12] P. Wu, J. Liu, M. Li, Y. Sun, F. Shen, Fast sparse coding networks for anomaly detection in videos, Pattern Recogn., 107 (2020), 107515. https://doi.org/10.1016/j.patcog.2020.107515 doi: 10.1016/j.patcog.2020.107515
    [13] J. Li, S. Wei, W. Dai, Combination of manifold learning and deep learning algorithms for mid-term electrical load forecasting, IEEE T. Neur. Net. Lear. Syst., 34 (2023), 2584–2593. https://doi.org/10.1109/TNNLS.2021.3106968 doi: 10.1109/TNNLS.2021.3106968
    [14] F. Deeba, S. Kun, F. Ali Dharejo, Y. Zhou, Sparse representation based computed tomography images reconstruction by coupled dictionary learning algorithm, IET Image Process., 14 (2020), 2365–2375. https://doi.org/10.1049/iet-ipr.2019.1312 doi: 10.1049/iet-ipr.2019.1312
    [15] C. Dong, C. C. Loy, K. He, X. Tang, Learning a deep convolutional network for image super-resolution, In: Computer Vision–ECCV 2014, 2014, 184–199. https://doi.org/10.1007/978-3-319-10593-2_13
    [16] J. Kim, J. K. Lee, K. M. Lee, Deeply-recursive convolutional network for image super-resolution, In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, 1637–1645. https://doi.org/10.1109/CVPR.2016.181
    [17] Y. Tai, J. Yang, X. Liu, Image super-resolution via deep recursive residual network, In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, 2790–2798. https://doi.org/10.1109/CVPR.2017.298
    [18] Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, Y. Fu, Image super-resolution using very deep residual channel attention networks, In: Computer Vision–ECCV 2018, 2018, 294–310. https://doi.org/10.1007/978-3-030-01234-2_18
    [19] T. Dai, J. Cai, Y. Zhang, S. T. Xia, L. Zhang, Second-order attention network for single image super-resolution, In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, 11057–11066. https://doi.org/10.1109/cvpr.2019.01132
    [20] P. Wei, Z. Xie, H. Lu, Z. Zhan, Q. Ye, W. Zuo, et al., Component divide-and-conquer for real-world image super-resolution, In: Computer Vision–ECCV 2020, 2020, 101–117. https://doi.org/10.1007/978-3-030-58598-3_7
    [21] W. Zhang, W. Zhao, J. Li, P. Zhuang, H. Sun, Y. Xu, et al., CVANet: Cascaded visual attention network for single image super-resolution, Neural Networks, 170 (2024), 622–634. https://doi.org/10.1016/j.neunet.2023.11.049 doi: 10.1016/j.neunet.2023.11.049
    [22] Y. Wang, S. Jin, Z. Yang, H. Guan, Y. Ren, K. Cheng, et al., TTSR: A transformer-based topography neural network for digital elevation model super-resolution, IEEE T. Geosci. Remote Sens., 62 (2024), 4403179. https://doi.org/10.1109/TGRS.2024.3360489 doi: 10.1109/TGRS.2024.3360489
    [23] Y. Chen, R. Xia, K. Yang, K. Zou, MICU: Image super-resolution via multi-level information compensation and U-net, Expert Syst. Appl., 245 (2024), 123111. https://doi.org/10.1016/j.eswa.2023.123111 doi: 10.1016/j.eswa.2023.123111
    [24] Z. H. Qu, Y. M. Shao, T. M. Deng, J. Zhu, X. H. Song, Traffic sign detection and recognition under complex lighting conditions, Laser. Optoelectron. P., 56 (2019), 231009. https://doi.org/10.3788/LOP56.231009 doi: 10.3788/LOP56.231009
    [25] X. G. Zhang, X. L. Liu, J. Li, H. D. Wang, Real-time detection and recognition of speed limit traffic signs under BP neural network, J. Xidian Univ., 45 (2018), 136–142. https://doi.org/10.3969/j.issn.1001-2400.2018.05.022 doi: 10.3969/j.issn.1001-2400.2018.05.022
    [26] G. Z. Xu, Y. Zhou, B. Dong, C. C. Liao, Traffic signage recognition based on improved cascade R-CNN, Sens. Microsyst., 40 (2021), 142–145+153. https://doi.org/10.13873/j.1000-9787(2021)05-0142-04 doi: 10.13873/j.1000-9787(2021)05-0142-04
    [27] L. Liu, S. Lu, R. Zhong, B. Wu, Y. Yao, Q. Zhang, et al., Computing systems for autonomous driving: State of the art and challenges, IEEE Internet Things J., 8 (2021), 6469–6486. https://doi.org/10.1109/JIOT.2020.3043716 doi: 10.1109/JIOT.2020.3043716
    [28] H. Singh, A. Kathuria, Analyzing driver behavior under naturalistic driving conditions: A review, Accident Anal. Prev., 150 (2021), 105908. https://doi.org/10.1016/j.aap.2020.105908 doi: 10.1016/j.aap.2020.105908
    [29] S. Woo, J. Park, J. Y. Lee, I. S. Kweon, CBAM: Convolutional block attention module, In: Computer Vision–ECCV 2018, 2018, 3–19. https://doi.org/10.1007/978-3-030-01234-2_1
    [30] J. Deng, W. Dong, R. Socher, L. J. Li, K. Li, F. F. Li, Imagenet: A large-scale hierarchical image database, In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009, 248–255. https://doi.org/10.1109/CVPR.2009.5206848
    [31] X. Wang, K. Yu, C. Dong, C. C. Loy, Recovering realistic texture in image super-resolution by deep spatial feature transform, In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, 606–615. https://doi.org/10.1109/CVPR.2018.00070
    [32] A. Ignatov, N. Kobyshev, R. Timofte, K. Vanhoey, DSLR-quality photos on mobile devices with deep convolutional networks, In: 2017 IEEE International Conference on Computer Vision (ICCV), 2017, 3297–3305. https://doi.org/10.1109/ICCV.2017.355
    [33] J. Hu, L. Shen, S. Albanie, G. Sun, E. Wu, Squeeze-and-excitation networks, IEEE T. Pattern Anal., 42 (2020), 2011–2023. https://doi.org/10.1109/tpami.2019.2913372 doi: 10.1109/tpami.2019.2913372
    [34] Z. Cui, N. Wang, Y. Su, W. Zhang, Y. Lan, A. Li, ECANet: Enhanced context aggregation network for single image dehazing, Signal Image Video P., 17 (2023), 471–479. https://doi.org/10.1007/s11760-022-02252-w doi: 10.1007/s11760-022-02252-w
    [35] J. Xu, Z. Li, B. Du, M. Zhang, J. Liu, Reluplex made more practical: Leaky ReLU, In: 2020 IEEE Symposium on Computers and Communications (ISCC), 2020, 1–7. https://doi.org/10.1109/ISCC50000.2020.9219587
    [36] F. Nie, H. Huang, X. Cai, C. Ding, Efficient and robust feature selection via joint ℓ2,1-norms minimization, In: Advances in Neural Information Processing Systems, 2010.
    [37] A. Hore, D. Ziou, Image quality metrics: PSNR vs. SSIM, In: 2010 20th International Conference on Pattern Recognition, 2010, 2366–2369. https://doi.org/10.1109/ICPR.2010.579
    [38] D. Han, Comparison of commonly used image interpolation methods, In: Proceedings of the 2nd International Conference on Computer Science and Electronics Engineering (ICCSEE 2013), 2013, 1556–1559. https://doi.org/10.2991/iccsee.2013.391
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)