In the chemical industry, the ethylene cracking furnace is the core piece of ethylene production equipment, and its safe and stable operation must be ensured. The fire door (peep door) is the only observation window through which the high-temperature operating conditions inside the cracking furnace can be observed. In the automatic monitoring of ethylene production, accurate identification of the open and closed states of the fire door is therefore particularly important. Based on a study of the ethylene cracking production process, this work uses deep learning to recognize the open and closed states of the fire door. First, a series of preprocessing and augmentation operations are applied to the originally collected fire door image data. Then, a recognition model is constructed based on a convolutional neural network and trained on the preprocessed data; optimization algorithms such as Adam are used to update the model parameters and improve the model's generalization ability. Finally, the proposed recognition model is evaluated on the test set and compared with a transfer learning model. The experimental results show that the proposed model accurately recognizes the open state of the fire door and is more stable than the transfer learning model.
Citation: Qirui Li, Baikun Zhang, Delong Cui, Zhiping Peng, Jieguang He. The research of recognition of peep door open state of ethylene cracking furnace based on deep learning[J]. Mathematical Biosciences and Engineering, 2022, 19(4): 3472-3486. doi: 10.3934/mbe.2022160
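As a rough illustration of the pipeline summarized in the abstract (image preprocessing and augmentation, a CNN-based recognition model, and Adam-driven training for open/closed classification), the following PyTorch sketch is provided. The network architecture, input resolution, augmentation choices, and data directory layout are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch of an augmentation + CNN + Adam training loop for binary
# open/closed fire door classification. Layer sizes, the 128x128 input
# resolution, and the ImageFolder directory layout (open/ and closed/
# subfolders) are assumptions made for illustration only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Preprocessing and augmentation (flips/rotations stand in for the
# paper's unspecified augmentation pipeline).
train_tf = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
])

class FireDoorCNN(nn.Module):
    """Small CNN for binary open/closed classification."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

def train(data_dir: str = "fire_door_images/train", epochs: int = 10):
    # Hypothetical layout: data_dir/open and data_dir/closed.
    dataset = datasets.ImageFolder(data_dir, transform=train_tf)
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    model = FireDoorCNN()
    criterion = nn.CrossEntropyLoss()          # cross-entropy loss
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam updates
    for epoch in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch + 1}: loss {loss.item():.4f}")

if __name__ == "__main__":
    train()
```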