Research article

Fatigue detection method for UAV remote pilot based on multi feature fusion

  • Received: 09 September 2022; Revised: 27 October 2022; Accepted: 01 November 2022; Published: 11 November 2022
  • In recent years, the UAV industry has been developing rapidly. However, there has so far been no research on fatigue detection for UAV remote pilots, a core technology for ensuring UAV flight safety. To address this problem, a fatigue detection method for UAV remote pilots is proposed in this paper. Specifically, we first build a UAV operator fatigue detection database (OFDD). By analyzing the fatigue features in the database, we find that multiple facial features are highly correlated with the fatigue state, especially the head posture, and that temporal information is essential for distinguishing between yawning and speaking in UAV remote pilot fatigue detection. Based on these findings, we propose a fatigue detection method for UAV remote pilots that efficiently locates the relevant facial regions, extracts eye, mouth and head posture features with a multi-feature extraction module, and makes the final decision with an efficient SVM-based temporal fatigue decision module. The experimental results show that the method not only performs well on a traditional driver dataset, achieving an accuracy of 97.05%, but also achieves the highest detection accuracy of 97.32% on the UAV remote pilot fatigue detection dataset OFDD.

    Citation: Lei Pan, Chongyao Yan, Yuan Zheng, Qiang Fu, Yangjie Zhang, Zhiwei Lu, Zhiqing Zhao, Jun Tian. Fatigue detection method for UAV remote pilot based on multi feature fusion[J]. Electronic Research Archive, 2023, 31(1): 442-466. doi: 10.3934/era.2023022
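
    The abstract outlines a pipeline that locates facial regions, extracts eye, mouth and head-posture features, and feeds them to an SVM-based temporal fatigue decision module. Below is a minimal sketch of how such a fusion-plus-SVM decision stage could look; the feature definitions, window length and thresholds are illustrative assumptions, not the authors' implementation.

    # Illustrative sketch (assumed, not the paper's code): fuse windowed eye,
    # mouth and head-posture statistics and classify each window with an SVM.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    WINDOW = 90  # assumed window of roughly 3 s of video at 30 fps

    def fuse_window(ear, mar, pitch):
        """Summarise per-frame eye aspect ratio, mouth aspect ratio and head
        pitch (degrees) over one window into a single fused feature vector."""
        ear, mar, pitch = map(np.asarray, (ear, mar, pitch))
        return np.array([
            ear.mean(), ear.min(),
            (ear < 0.2).mean(),         # PERCLOS-like closed-eye ratio; 0.2 is an assumed threshold
            mar.mean(), mar.max(),
            (mar > 0.6).mean(),         # sustained open mouth separates yawning from brief speech
            pitch.mean(), pitch.std(),  # nodding shows up as pitch drift and variance
        ])

    def train_decision_module(X, y):
        """X: one fused feature vector per labelled window; y: 0 = alert, 1 = fatigued."""
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        return clf.fit(X, y)

    In use, fuse_window would be applied to each sliding window of tracked facial landmarks, and the trained classifier's per-window predictions would drive the fatigue decision; head-posture statistics are included because the abstract identifies head posture as strongly correlated with the fatigue state.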



  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)