Research article

Cross-subject EEG-based emotion recognition through dynamic optimization of random forest with sparrow search algorithm


  • Received: 18 December 2023 Revised: 03 February 2024 Accepted: 18 January 2024 Published: 29 February 2024
  • The objective of EEG-based emotion recognition is to classify emotions by decoding signals, with potential applications in the fields of artificial intelligence and bioinformatics. Cross-subject emotion recognition is more difficult than intra-subject emotion recognition, and the poor adaptability of classification model parameters is a significant cause of its low accuracy. We propose a dynamically optimized Random Forest model based on the Sparrow Search Algorithm (SSA-RF). The number of decision trees (DTN) and the minimum number of leaves (LMN) of the RF are dynamically optimized by the SSA. Twelve features are used to construct feature combinations, from which the optimal combination is selected. The DEAP and SEED datasets are employed to test the performance of SSA-RF. The experimental results show that SSA-RF achieves 76.81% accuracy for binary classification on DEAP and 75.96% accuracy for triple classification on SEED, both higher than those of the traditional RF. This study provides new insights for the development of cross-subject emotion recognition and has significant theoretical value.

    Citation: Xiaodan Zhang, Shuyi Wang, Kemeng Xu, Rui Zhao, Yichong She. Cross-subject EEG-based emotion recognition through dynamic optimization of random forest with sparrow search algorithm[J]. Mathematical Biosciences and Engineering, 2024, 21(3): 4779-4800. doi: 10.3934/mbe.2024210
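The abstract's core idea is to let the Sparrow Search Algorithm tune two Random Forest hyperparameters, the number of decision trees (DTN) and the minimum number of leaves (LMN). A minimal, simplified sketch of that loop is shown below; the search bounds, population size, iteration count, and position-update rules are illustrative assumptions, not the paper's exact configuration, and scikit-learn's `RandomForestClassifier` stands in for the authors' RF implementation.

```python
# Illustrative sketch: SSA-style optimization of RF hyperparameters (DTN, LMN).
# Bounds, population size, and update rules are assumptions for demonstration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Stand-in data; the paper uses EEG features from DEAP/SEED instead.
X, y = make_classification(n_samples=200, n_features=12, random_state=0)

# Search bounds for (number of trees, minimum samples per leaf).
LOW, HIGH = np.array([10.0, 1.0]), np.array([100.0, 10.0])

def fitness(pos):
    """Cross-validated accuracy of an RF with the candidate hyperparameters."""
    dtn, lmn = int(round(pos[0])), int(round(pos[1]))
    clf = RandomForestClassifier(n_estimators=dtn, min_samples_leaf=lmn,
                                 random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

# Initialize a small sparrow population uniformly inside the bounds.
pop = rng.uniform(LOW, HIGH, size=(6, 2))
fit = np.array([fitness(p) for p in pop])

for _ in range(3):                       # a few SSA iterations
    order = np.argsort(-fit)             # rank sparrows, best first
    pop, fit = pop[order], fit[order]
    n_prod = 2                           # "producers" explore broadly
    for i in range(n_prod):
        pop[i] = np.clip(pop[i] * np.exp(-i / (rng.random() * 3 + 1)),
                         LOW, HIGH)
    for i in range(n_prod, len(pop)):    # "scroungers" move toward the best
        pop[i] = np.clip(pop[0] + rng.normal(0, 1, 2) * (pop[i] - pop[0]),
                         LOW, HIGH)
    fit = np.array([fitness(p) for p in pop])

best = pop[np.argmax(fit)]
print("best DTN:", int(round(best[0])),
      "best LMN:", int(round(best[1])),
      "CV accuracy: %.3f" % fit.max())
```

The key design point is that fitness is the classifier's cross-validated accuracy, so the SSA population climbs directly on classification performance rather than on a proxy objective; the full SSA also includes a "scout" danger-awareness step omitted here for brevity.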






  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
