Research article

Shielding facial physiological information in video


  • Received: 21 January 2022 Revised: 08 March 2022 Accepted: 09 March 2022 Published: 21 March 2022
  • With the recent development of video-based non-contact physiological signal detection methods, physiological parameters such as an individual's heart rate and heart rate variability can now be obtained from ordinary video alone. Consequently, personal physiological information may be leaked unknowingly as videos spread, which can cause privacy or security problems. In this paper, a new method is proposed that shields physiological information in video without significantly reducing video quality. First, the principle of the most widely used physiological signal detection technique, remote photoplethysmography (rPPG), is analyzed. Then, the facial regions of interest (ROIs) that contain physiological information with a high signal-to-noise ratio are selected. Two physiological information forgery operations, single-channel periodic noise addition with blur filtering and brightness fine-tuning, are performed on the ROIs. Finally, the processed ROI images are merged back into the video frames to obtain the processed video. Experiments were performed on the VIPL-HR video dataset. The interference efficiencies of the proposed method against two commonly used rPPG methods, Independent Component Analysis (ICA) and the Chrominance-based Method (CHROM), are 82.9% and 84.6%, respectively, demonstrating the effectiveness of the proposed method.

    Citation: Kun Zheng, Junjie Shen, Guangmin Sun, Hui Li, Yu Li. Shielding facial physiological information in video[J]. Mathematical Biosciences and Engineering, 2022, 19(5): 5153-5168. doi: 10.3934/mbe.2022241
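
    The abstract describes a four-step pipeline (ROI selection, single-channel periodic noise addition with blur filtering, brightness fine-tuning, and merging the processed ROIs back into the frames) but does not reproduce an implementation. The following is a minimal, hypothetical sketch of such a pipeline in Python with OpenCV/NumPy; the helper name shield_roi, the fixed centre ROI, and all frequencies, amplitudes, and kernel sizes are illustrative assumptions, not the parameters used in the paper.

```python
# Minimal sketch (not the authors' code): perturb facial ROIs so that the rPPG
# pulse signal is masked, following the operations described in the abstract.
# ROI box, noise frequency, amplitude and blur kernel are illustrative choices.
import numpy as np
import cv2

def shield_roi(frame, roi, t, fps=30.0, noise_freq_hz=1.2, amplitude=2.0):
    """Apply single-channel periodic noise + blur + brightness fine-tuning
    to one rectangular ROI of a BGR frame (hypothetical helper)."""
    x, y, w, h = roi
    patch = frame[y:y + h, x:x + w].astype(np.float32)

    # 1) Single-channel periodic noise: add a small sinusoid (~1.2 Hz here,
    #    i.e., inside the normal heart-rate band) to the green channel only.
    patch[:, :, 1] += amplitude * np.sin(2.0 * np.pi * noise_freq_hz * t / fps)

    # 2) Blur filtering to suppress residual spatial pulse structure.
    patch = cv2.GaussianBlur(patch, (5, 5), 0)

    # 3) Brightness fine-tuning: a small, slowly varying global offset.
    patch += 1.0 * np.sin(2.0 * np.pi * 0.3 * t / fps)

    # Merge the processed ROI back into the frame.
    frame[y:y + h, x:x + w] = np.clip(patch, 0, 255).astype(np.uint8)
    return frame

# Usage: process a video frame by frame. The ROI would normally come from a
# face detector / landmark model (e.g., forehead or cheek regions).
cap = cv2.VideoCapture("input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
writer = None
t = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if writer is None:
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter("shielded.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    # Placeholder ROI covering the image centre; replace with detected face ROIs.
    roi = (w // 4, h // 4, w // 2, h // 2)
    writer.write(shield_roi(frame, roi, t, fps))
    t += 1
cap.release()
if writer is not None:
    writer.release()
```

    In practice the ROIs would be driven by a face detector and facial landmarks, and the perturbation strength would be tuned so that the video quality is not visibly degraded while the pulse estimates recovered by ICA or CHROM are disrupted.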



  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
