Research article

Defect detection in code characters with complex backgrounds based on BBE


  • Received: 11 March 2021; Accepted: 26 April 2021; Published: 29 April 2021
  • Abstract: Computer vision technologies have been widely applied to defect detection. However, most existing detection methods require high-quality images and can only process code characters on simple, high-contrast backgrounds. In this paper, a deep-learning-based defect detection approach is proposed to efficiently and accurately detect defects in code characters on complex backgrounds. Specifically, image processing algorithms and data augmentation techniques were used to generate a large number of defect samples, producing a large dataset with a balanced ratio of positive and negative samples. An object detection network called BBE was built on the core module of EfficientNet. Experimental results show that the model reaches an mAP of 0.9961 and an accuracy of 0.9985. Individual character detection results were screened against predefined quality inspection standards to evaluate the overall quality of the code characters, and the results verify the effectiveness of the proposed method for industrial production. The method achieves high accuracy and speed, and it exhibits strong robustness and transferability to other similar defect detection tasks. To the best of our knowledge, this is the first time BBE has been applied to defect inspection in the real plastic container industry.
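    The abstract states that BBE is built on the core module of EfficientNet, i.e., the mobile inverted bottleneck (MBConv) block with squeeze-and-excitation. The article page itself carries no code; the following is a minimal illustrative sketch of such a block in TensorFlow/Keras, assumed here only as a convenient framework for illustration. The function name mbconv_block, all hyperparameter defaults, and the toy two-block stack at the end are choices made for this sketch and are not taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

def mbconv_block(x, in_ch, out_ch, expand_ratio=6, kernel_size=3, stride=1, se_ratio=0.25):
    """Mobile inverted bottleneck block with squeeze-and-excitation (EfficientNet's core module).

    Illustrative sketch only; hyperparameters are not taken from the BBE paper.
    """
    expanded = in_ch * expand_ratio

    # 1x1 expansion convolution
    h = layers.Conv2D(expanded, 1, padding="same", use_bias=False)(x)
    h = layers.BatchNormalization()(h)
    h = layers.Activation("swish")(h)

    # Depthwise convolution
    h = layers.DepthwiseConv2D(kernel_size, strides=stride, padding="same", use_bias=False)(h)
    h = layers.BatchNormalization()(h)
    h = layers.Activation("swish")(h)

    # Squeeze-and-excitation: channel-wise reweighting of the depthwise features
    se = layers.GlobalAveragePooling2D()(h)
    se = layers.Reshape((1, 1, expanded))(se)
    se = layers.Conv2D(max(1, int(in_ch * se_ratio)), 1, activation="swish")(se)
    se = layers.Conv2D(expanded, 1, activation="sigmoid")(se)
    h = layers.Multiply()([h, se])

    # 1x1 linear projection back to out_ch channels
    h = layers.Conv2D(out_ch, 1, padding="same", use_bias=False)(h)
    h = layers.BatchNormalization()(h)

    # Skip connection only when spatial size and channel count are unchanged
    if stride == 1 and in_ch == out_ch:
        h = layers.Add()([h, x])
    return h

# Example: stack two blocks on a 128x128 grayscale character crop (sizes are arbitrary)
inputs = layers.Input(shape=(128, 128, 1))
stem = layers.Conv2D(32, 3, strides=2, padding="same", use_bias=False)(inputs)
features = mbconv_block(stem, in_ch=32, out_ch=32)
features = mbconv_block(features, in_ch=32, out_ch=64, stride=2)
backbone = tf.keras.Model(inputs, features)
backbone.summary()
```

    In a full detector, blocks of this kind would form the backbone feeding a detection head; the actual BBE topology, channel widths, augmentation pipeline, and training schedule are those described in the paper itself.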

    Citation: Jianzhong Peng, Wei Zhu, Qiaokang Liang, Zhengwei Li, Maoying Lu, Wei Sun, Yaonan Wang. Defect detection in code characters with complex backgrounds based on BBE[J]. Mathematical Biosciences and Engineering, 2021, 18(4): 3755-3780. doi: 10.3934/mbe.2021189



  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
