Motor imagery (MI) brain-computer interfaces (BCIs) allow users to communicate directly with external devices by decoding movement intention from electroencephalogram (EEG) signals. However, cerebral cortical potentials are highly rhythmic, and sub-band features vary across experimental conditions and subjects, carrying different category-specific semantic information in the sample target space. Feature fusion can yield more discriminative features, but naively fusing features from different embedding spaces makes the model's global loss difficult to converge and ignores the complementarity of the features. Considering the similarity and category contribution of different sub-band features, we propose a multi-band centroid contrastive reconstruction fusion network (MB-CCRF). We obtain multi-band spatio-temporal features by frequency division, preserving the task-related rhythmic components of the EEG; a multi-stream cross-layer connected convolutional network then learns a deep feature representation for each sub-band separately; and a centroid contrastive reconstruction fusion module maps features from different sub-bands and categories into a shared embedding space by comparing them with category prototypes, reconstructing the semantic structure of the features so that the global loss of the fused features converges more easily. Finally, a learned mechanism models the similarity between channel features and uses it to weight the fused sub-band features, enhancing the more discriminative features while suppressing the useless ones. The proposed method achieves an accuracy of 79.96% on the BCI Competition IV-2a dataset.
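The frequency-division step described above can be sketched as a filter bank that splits each EEG trial into rhythm-related sub-bands. The band edges below (theta 4–8 Hz, mu/alpha 8–13 Hz, beta 13–30 Hz, low gamma 30–40 Hz) and the 250 Hz sampling rate are assumptions chosen to match common MI practice and the BCI Competition IV-2a recordings, not the paper's exact configuration:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, low, high, fs, order=4):
    """Zero-phase Butterworth band-pass applied along the time axis."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def split_subbands(eeg, fs=250, bands=((4, 8), (8, 13), (13, 30), (30, 40))):
    """Split a (channels, samples) EEG trial into a (n_bands, channels, samples) stack."""
    return np.stack([bandpass(eeg, lo, hi, fs) for lo, hi in bands])

# toy trial: 22 channels (as in BCI IV-2a), 2 s at 250 Hz
trial = np.random.randn(22, 500)
subbands = split_subbands(trial)
print(subbands.shape)  # (4, 22, 500)
```

Each slice of the resulting stack would then feed one stream of the multi-stream convolutional network.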
Moreover, comparison tests verify the classification performance of the sub-band features across different subjects, confusion-matrix tests verify the category propensity of the different sub-band features, and visual analysis shows the class-wise distributions of each sub-band feature and the fused feature, revealing the importance of the different sub-band features for the EEG-based MI classification task.
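The idea of comparing embeddings with category prototypes can be illustrated by a minimal prototype-contrastive loss: each embedding is pulled toward its own class centroid and pushed away from the others via a softmax over cosine similarities. This is a generic sketch of the technique, not the paper's exact MB-CCRF objective; the temperature `tau` and the cosine-similarity choice are assumptions:

```python
import numpy as np

def centroid_contrastive_loss(features, labels, centroids, tau=0.5):
    """Cross-entropy over similarities to class centroids (prototypes).

    features:  (n_samples, d) embeddings
    labels:    (n_samples,) integer class labels
    centroids: (n_classes, d) one prototype per class
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    logits = f @ c.T / tau                       # (n_samples, n_classes)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# embeddings aligned with their own centroids give a lower loss
centroids = np.eye(3)
feats = np.eye(3)
print(centroid_contrastive_loss(feats, np.array([0, 1, 2]), centroids))
```

Minimizing such a loss per sub-band encourages features of the same class, regardless of band, to cluster around a shared prototype in the common embedding space.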
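The final weighting step, in which more discriminative sub-band features are enhanced and less useful ones suppressed, can be approximated by a similarity-based attention over the per-band feature vectors. The specific rule below (weighting each band by its average cosine similarity to the others, normalized by a softmax) is a hypothetical stand-in for the learned mechanism in the paper:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_by_similarity(band_feats):
    """band_feats: (n_bands, d). Returns per-band weights and the weighted sum."""
    f = band_feats / np.linalg.norm(band_feats, axis=1, keepdims=True)
    sim = f @ f.T                  # pairwise cosine similarity between bands
    np.fill_diagonal(sim, 0.0)     # ignore self-similarity
    w = softmax(sim.mean(axis=1))  # one scalar weight per band
    return w, (w[:, None] * band_feats).sum(axis=0)

weights, fused = fuse_by_similarity(np.random.randn(4, 16))
print(weights.sum())  # 1.0 (softmax weights)
```

In the actual network this weighting would be learned end-to-end rather than computed by a fixed rule.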
Citation: Jiacan Xu, Donglin Li, Peng Zhou, Chunsheng Li, Zinan Wang, Shenghao Tong. A multi-band centroid contrastive reconstruction fusion network for motor imagery electroencephalogram signal decoding[J]. Mathematical Biosciences and Engineering, 2023, 20(12): 20624-20647. doi: 10.3934/mbe.2023912