Research article

NeuroWave-Net: Enhancing epileptic seizure detection from EEG brain signals via advanced convolutional and long short-term memory networks

  • Received: 03 January 2024 Revised: 25 March 2024 Accepted: 25 March 2024 Published: 15 April 2024
  • This study presented a new approach to seizure classification utilizing electroencephalogram (EEG) data. We introduced the NeuroWave-Net, an innovative hybrid model that seamlessly integrates convolutional neural network (CNN) and long short-term memory (LSTM) architectures. Unlike conventional methods, our model capitalized on the CNN's proficiency in feature extraction and the LSTM's prowess in classifying seizures. The key strength of the NeuroWave-Net lies in its ability to combine these distinct architectures, synergizing their capabilities for enhanced accuracy in identifying seizure conditions within EEG data. Our proposed model exhibited outstanding performance, achieving a classification accuracy of 99.48%. This study contributed to the advancement of seizure classification models, providing a robust and streamlined approach for accurate categorization within EEG datasets. NeuroWave-Net stands as a testament to the potential of hybrid neural network architectures in neurological diagnostics.

    Citation: Md. Mehedi Hassan, Rezuana Haque, Sheikh Mohammed Shariful Islam, Hossam Meshref, Roobaea Alroobaea, Mehedi Masud, Anupam Kumar Bairagi. NeuroWave-Net: Enhancing epileptic seizure detection from EEG brain signals via advanced convolutional and long short-term memory networks[J]. AIMS Bioengineering, 2024, 11(1): 85-109. doi: 10.3934/bioeng.2024006



    Abbreviations

    EEG: Electroencephalogram; CNN: Convolutional neural network; LSTM: Long short-term memory; RF: Random forest; SVM: Support vector machine; PSO: Particle swarm optimization; 1D CNN: 1-Dimensional convolutional neural network; 2D CNN: 2-Dimensional convolutional neural network; DCNN: Deep convolutional neural network; iEEG: Intracranial electroencephalography; C-LSTM: Convolutional long short-term memory; BiLSTM: Bidirectional long short-term memory; ReLU: Rectified linear unit; Conv1D: 1D convolution layer

    The term “seizure” finds its origins in ancient Greek, signifying the act of taking hold. According to Fisher et al.[1], an epileptic seizure is a temporary manifestation of signs and symptoms attributed to abnormal and excessive neuronal activity in the brain. To be classified as epileptic, an individual must have experienced more than one epileptic seizure. Seizures possess a distinct and definable onset and conclusion due to their transient nature. Specifically, a partial seizure is characterized by synchronous neuronal activity originating from a single cerebral hemisphere. Over time, these partial seizures can progress into generalized seizures, where neuronal discharge emerges from both cerebral hemispheres. Both generalized seizures and a specific type of partial seizure referred to as partial complex seizures can result in a loss of consciousness [2].

    Seizures can manifest suddenly, placing the individual in a vulnerable state where they cannot ensure their own safety. Depending on the circumstances, a generalized seizure can be life-threatening [3]. The associated constraints and unpredictability significantly impact daily life and can lead to adverse psychological effects. Therefore, the development of a real-time prediction device holds promise for mitigating anxiety and enabling proactive measures to reach a safe environment before the onset of a seizure. Most prediction models are developed from brain signals obtained through electroencephalography (EEG) [4]. However, validating EEG data containing seizures requires clinical authentication by a neurologist, which incurs substantial expense; consequently, openly accessible data in this critical domain are scarce. Nonetheless, in the past decade there has been a notable increase in the availability of open-source datasets [5]. These datasets have played a pivotal role in advancing seizure prediction research within the scientific community, serving as standardized benchmarks for evaluating algorithmic performance [6].

    EEG, a medical apparatus, quantifies the relative potential difference between two electrodes positioned on the scalp, typically measured in microvolts (µV). When clusters of neurons discharge synchronously, they generate electrical fields at the mesoscopic to macroscopic scales, detectable by electrodes situated across different regions of the scalp [7]. The mesoscopic scale spans patches of the cortex, while the macroscopic scale can encompass entire cortical regions. Activity occurring at the microscopic level, such as the firing of individual neurons, produces electrical fields too weak to be detected by EEG [8]. This limitation arises because electrical field strength falls off rapidly with increasing distance. It is crucial to emphasize that neurologists discourage comparing raw voltage data across patients because of inherent physiological variation [9]: factors such as scalp thickness and procedural disparities in electrode placement contribute to this diversity, underscoring the need for standardized approaches to EEG analysis so that comparisons in research and clinical practice are accurate and meaningful. EEG is an electrophysiological technique used to monitor and record the electrical activity within the brain. The brain is divided into four main lobes: the frontal lobe, parietal lobe, temporal lobe, and occipital lobe [10]. These lobes perform distinct functions and emit various rhythmic waves during different actions. Figure 1 illustrates the brain lobes.

    Figure 1.  (a) The brain comprises distinct regions known as lobes, namely the frontal lobe, parietal lobe, occipital lobe, and temporal lobe, each with its unique functions and characteristics. (b) The 10-20 EEG electrode placement for collecting brain signal data from the human brain.

    While numerous studies have explored epileptic seizure prediction using a variety of machine learning techniques and deep learning models, a prevalent challenge has been limited model performance, particularly where shallow machine learning models are relied upon. For instance, Zhao et al. [11] applied a convolutional neural network (CNN) model to the Bonn dataset, achieving a commendable 96.97% accuracy. Conversely, Nishad et al. [12] tackled the same dataset using a random forest (RF) model, surpassing previous results with an impressive 99% accuracy. Another research study [13] also achieved 99% accuracy in detecting epilepsy from EEG signals by leveraging the eigenvalues of the Hankel matrix. Additionally, study [14] focused on feature optimization through the particle swarm optimization (PSO) algorithm, applying a support vector machine (SVM) model to the Bonn dataset and achieving a noteworthy 99% accuracy. Despite these advancements, the overarching limitation persists, as most studies continue to rely on shallow models.

    In our research, we recognize and address these limitations by introducing a hybrid deep learning model. By integrating the strengths of both deep learning architectures and leveraging innovative techniques, our study aims to significantly enhance predictive accuracy in epileptic seizure prediction. The main contributions of this study are as follows:

    • We utilized the Bonn dataset, a comprehensive repository of brain signal data. This dataset, known for its relevance and richness in neurological information, served as the foundational resource for our study.

    • Our study systematically assesses the performance of 1D CNN and 2D CNN models on the signal dataset, providing valuable insights into the effectiveness of each approach.

    • We introduce the NeuroWave-Net, a novel model that combines CNN and long short-term memory (LSTM) architectures with careful hyperparameter tuning. This hybrid model is designed to optimize performance in analyzing complex datasets.

    • Our contribution extends to providing a detailed examination of the proposed NeuroWave-Net, offering mathematical expressions and pseudocode. This comprehensive analysis facilitates a deeper understanding of the model and its potential applications.

    • To gauge the effectiveness of our proposed model, we conduct a comparative analysis against published works in the field. This benchmarking exercise helps highlight the strengths and contributions of our NeuroWave-Net in the context of existing research.

    In the methodology section, we present the proposed model of this study. The subsequent experimental exploration section details the outcomes of the model's performance. The comparison section discusses the novelty of our approach and compares it with various published papers.

    In this work [15], the epileptic condition is classified using transfer learning, with two distinct datasets used for benchmarking: the iNeuro EEG and CHB-MIT (Children's Hospital Boston and the Massachusetts Institute of Technology) databases. The datasets contain several types of seizures. Using deep neural network (DNN) models, the authors obtained accuracies of 96.7% and 87.87% for five-state epileptic classification on the CHB-MIT and iNeuro EEG datasets, respectively. The primary aim was to furnish an accurate evaluation of intracranial electroencephalography (iEEG) data in order to facilitate surgical intervention or aid in the treatment of drug-resistant epilepsy. In the research [16], the effectiveness of the proposed model was validated using two publicly available benchmark iEEG datasets, and a comprehensive, nonpublic clinical stereo-EEG database was utilized to provide additional validation. The provided sources do not furnish specific details regarding the datasets' titles or sizes. On the Bern-Barcelona database, the proposed multibranch deep learning fusion model yielded noteworthy results with respect to sensitivity (97.78%), accuracy (97.60%), and specificity (97.42%), surpassing presently accessible cutting-edge methodologies. Furthermore, when implemented on a clinical dataset, the research demonstrated an intra-subject accuracy of 92.53% and a cross-subject accuracy of 88.03%. The results suggest that the proposed approach is an effective and robust method for determining the source of iEEG data.

    The study detailed in [17] aimed to pinpoint individuals affected by epileptic seizures and brain tumors through an analysis of brain signals. The primary thrust of this investigation revolved around the automated extraction of features to enhance classification performance. Utilizing a five-class dataset, the researchers deployed a convolutional long short-term memory (C-LSTM) model to gauge its efficacy. Impressively, the C-LSTM model exhibited a high accuracy of 98.80%, showcasing its capability in effectively categorizing the subjects, and it delivered predictions within 0.006 seconds, with a detection time as swift as one second. These findings underscore the robustness and swiftness of the C-LSTM model in accurate and rapid assessment of brain signal data for identifying epileptic seizures and brain tumors. In study [18], the short-time Fourier transform (STFT) was leveraged to optimize the processing of EEG signal data, significantly enhancing the model's efficiency. STFT was first applied to the time series data, paving the way for a meticulous focus on feature extraction, and the dataset was subsequently classified using both CNN and bidirectional LSTM (Bi-LSTM) models. The Bi-LSTM model performed best, achieving an accuracy of 97.2%, while the CNN model, though proficient, attained 93.9%. Furthermore, the authors proposed a hardware architecture specifically tailored to the STFT model, boasting an impressively low maximum error rate of merely 0.13%. These results underscore the potential of the Bi-LSTM model and the significance of STFT in refining EEG data analysis.

    In article [19], transfer learning techniques were explored for the classification of brain signal data derived from EEG readings. Utilizing a dataset with brain signal durations of 5 minutes, the researchers orchestrated a combined model involving DenseNet and LSTM. Their findings showcased impressive outcomes, achieving specificities of 93.28% and 93.65%, along with a sensitivity of 92.92%. This performance notably surpassed prior research, underscoring the robustness and effectiveness of the hybrid model in predicting seizures.

    In research [20], the primary emphasis centered on EEG data analysis for the classification of epilepsy. The approach revolved around a hybrid model amalgamating CNNs and LSTM. This LSTM-CNN model achieved a noteworthy accuracy of 98%. Moreover, the model exhibited a high specificity of 99.56% alongside a recall of 92.02%. These results accentuate the efficacy and potential of the LSTM-CNN hybrid model in accurately categorizing epilepsy within EEG datasets.

    The study [21] aimed to improve automatic epileptic seizure detection accuracy by introducing a new random forest model with grid search optimization. It categorized EEG data into three groups: healthy subjects, seizure-free intervals, and seizure activity. The dataset included both simulated and real clinical EEG data. The results showed high accuracy (96.7%) in classifying these categories. However, noisy EEG signals may have an impact on the model's performance.

    Another study [22] presents a time-frequency representation (TFR) method based on improved eigenvalue decomposition of the Hankel matrix and Hilbert transform (IEVDHM–HT). The proposed TFR method was evaluated using synthetic signals and real epileptic seizure EEG signals. The EEG dataset was obtained from the publicly available database at the University of Bonn, Germany. The study achieved 100% accuracy in classifying epileptic seizures and seizure-free EEG signals using least-square support vector machines (LS-SVM) with radial basis function kernels. Common TFR challenges, such as sensitivity to noise and parameter tuning, may affect IEVDHM–HT.

    Another study [23] introduced an automated method to classify epileptic EEG signals using iterative filtering (IF), demonstrating better accuracy than the empirical mode decomposition (EMD). The authors also utilized EEG data from both healthy individuals and epileptic patients from the University of Bonn. The model achieved up to 99.5% accuracy in identifying normal, seizure, and seizure-free states with a random forest classifier.

    A study [24] focused on classifying epileptic EEG signals using signal transforms and CNNs. The authors employed the Bern-Barcelona EEG and epileptic seizure recognition datasets and used Fourier, wavelet, and EMD transforms for input generation. The findings showed high accuracy, up to 98.9% for certain signal types and up to 99.5% for seizure detection.

    In this study, we harnessed an EEG dataset as the foundational resource for developing a model. Initially, a CNN model was employed for classification on its own. Our innovation, termed NeuroWave-Net, then introduces a novel approach by amalgamating CNN and LSTM into a cohesive hybrid model. The crux of our proposed system lies in leveraging the strengths of the CNN for feature extraction while harnessing the LSTM's capabilities for the actual classification of seizure disease. By combining these two distinct architectures, NeuroWave-Net improves accuracy in identifying seizure conditions within EEG data. The overall workflow is shown in Figure 2, a flowchart illustrating how CNN-based feature extraction and LSTM-driven classification are combined. This comprehensive framework ensures a robust and streamlined approach to accurately categorizing seizures within the EEG dataset.

    Figure 2.  The working process of the proposed model of this study.

    In the architecture of our proposed model, named NeuroWave-Net, we initiate with the incorporation of a foundational convolutional layer featuring 64 filters. The kernel size is set to 3, and the rectified linear unit (ReLU) activation function is applied to introduce nonlinearity. Subsequently, a max-pooling layer is introduced to down-sample the spatial dimensions of the convolutional output.

    Following this, additional 1D convolutional layers (Conv1D) are implemented, progressively increasing the number of filters to 128, 512, and 1024. Each of these layers contributes to the extraction of hierarchical features from the input data. Within the dense layer, 256 filters are employed to further enhance the model's ability to capture intricate patterns and relationships in the data. A pivotal inclusion in the model architecture is the LSTM layer, strategically positioned to capture temporal dependencies in the sequential data. The LSTM layer introduces memory mechanisms that prove instrumental in understanding the context of the input sequence.

    The model culminates in a final dense layer, readying the extracted features for classification. This dense layer serves as the output layer, shaping the model's ability to categorize input data effectively. The comprehensive architecture, carefully orchestrated with convolutional, pooling, dense, and LSTM layers, embodies the essence of the NeuroWave-Net, ensuring a robust framework for the accurate classification of seizure diseases within EEG data. The pseudocode of the proposed model is shown in Algorithm 1, and all used parameter values and descriptions are shown in Table 1.

    Algorithm 1.  Pseudocode for the proposed NeuroWave-Net architecture.
    Data preprocessing
     1. Load EEG signal dataset D and convert labels to binary.
     2. Remove unnecessary columns and split the dataset (Dtrain 75%, Dtest 25%).

    Model initialization
     1. Initialize a sequential model with input shape (T, C), where T is the time steps (178) and C is the number of channels (1).

    Model architecture
     1. Input layer EEG signal with shape (T, C).
     2. Add Conv1D layer with F1 filters, kernel size K1, ReLU activation, and padding ‘same’.
     3. Add MaxPooling1D layer (P1, S1).
     4. Add Dropout layer (D1).
     5. Add Conv1D layer with F2, K2, ReLU activation, and padding ‘same’.
     6. Add Conv1D layer with F3, K3, ReLU activation, and padding ‘same’.
     7. Add Conv1D layer with F4, K4, ReLU activation, and padding ‘same’.
     8. Add Dense layer with N1 neurons, ReLU activation.
     9. Add Dropout layer (D2).
     10. Add LSTM layer with U1 units, return sequences=True.
     11. Add LSTM layer with U2 units.

    Final layers
     1. Add Dense (N2, ReLU).
     2. Add Dense (N3, ReLU).
     3. Add Dense (N4, ReLU).
     4. Add Dropout layer (D3).
     5. Add Dense (N3, ReLU).

    Model compilation
     1. Compile the model using binary cross-entropy loss, the Adam optimizer (learning rate α), and accuracy, precision, recall, and F1 metrics.

    Table 1.  Parameter details with values and descriptions.
    Parameter | Value | Details
    D | – | EEG signal dataset
    Dtrain | 75% | Training dataset
    Dtest | 25% | Testing dataset
    T | 178 | Time steps
    C | 1 | Number of channels
    F1, F2, F3, F4 | 64, 128, 512, 1024 | Number of filters in Conv1D layers
    K1, K2, K3, K4 | 3 | Kernel size in Conv1D layers
    D1, D2, D3 | 0.2 | Dropout rate
    N1, N2, N3, N4 | 256, 256, 128, 64 | Number of neurons in Dense layers
    U1, U2 | 64, 64 | Number of units in LSTM layers
    P1 | 2 | Pool size in MaxPooling1D layer
    S1 | 2 | Stride in MaxPooling1D layer
    α | 0.001 | Learning rate


    The proposed deep learning model is designed for sequence data, with a focus on time series analysis or sequence classification. It begins with a CNN layer to extract hierarchical features, followed by MaxPooling for dimensionality reduction. The model then employs a series of Conv1D layers with increasing filters for sophisticated feature extraction. Dense layers with ReLU activation contribute to further feature processing. To capture temporal dependencies, LSTM layers are incorporated, enhancing the model's ability to understand sequential patterns. Dropout is utilized for regularization, preventing overfitting. The model concludes with densely connected layers, progressively reducing the number of neurons. The final layer features a single neuron with a sigmoid activation, suitable for binary classification tasks. The model is compiled using binary cross-entropy loss, an Adam optimizer with a learning rate of 0.001, and accuracy as the evaluation metric. This combination of convolutional and recurrent layers, along with strategic dropout, aims to create a robust model for effective sequence analysis and classification. The proposed model architecture of this study is shown layer-wise in Figure 3.
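    To make the layer stack concrete, the following sketch reproduces Algorithm 1 and the Table 1 parameters in Keras. It is a minimal illustration, assuming a TensorFlow/Keras backend, a single-channel input of 178 time steps, and the final single-neuron sigmoid output described above; the helper name build_neurowave_net and the omission of callbacks and data pipelines are our own simplifications, not part of the published implementation.

```python
# Minimal sketch of the NeuroWave-Net layer stack (assumes TensorFlow/Keras).
# Filter counts, kernel sizes, dropout rates, and unit counts follow Table 1;
# the final sigmoid neuron implements the binary seizure / non-seizure output.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv1D, Dense, Dropout, Input, LSTM, MaxPooling1D
from tensorflow.keras.metrics import Precision, Recall
from tensorflow.keras.optimizers import Adam

def build_neurowave_net(time_steps=178, channels=1):
    model = Sequential([
        Input(shape=(time_steps, channels)),
        Conv1D(64, 3, activation="relu", padding="same"),    # F1, K1
        MaxPooling1D(pool_size=2, strides=2),                 # P1, S1
        Dropout(0.2),                                         # D1
        Conv1D(128, 3, activation="relu", padding="same"),    # F2, K2
        Conv1D(512, 3, activation="relu", padding="same"),    # F3, K3
        Conv1D(1024, 3, activation="relu", padding="same"),   # F4, K4
        Dense(256, activation="relu"),                        # N1
        Dropout(0.2),                                         # D2
        LSTM(64, return_sequences=True),                      # U1
        LSTM(64),                                             # U2
        Dense(256, activation="relu"),                        # N2
        Dense(128, activation="relu"),                        # N3
        Dense(64, activation="relu"),                         # N4
        Dropout(0.2),                                         # D3
        Dense(1, activation="sigmoid"),                       # binary output (per the text)
    ])
    model.compile(loss="binary_crossentropy",
                  optimizer=Adam(learning_rate=0.001),        # α from Table 1
                  metrics=["accuracy", Precision(), Recall()])
    return model
```

    Training would then follow the setup reported later in the ablation study, e.g. model.fit(X_train, y_train, epochs=100, validation_data=(X_test, y_test)).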

    Figure 3.  The model architecture of 1D CNN-LSTM model.

    This study utilizes EEG data sourced from a dataset [25] curated by researchers at Bonn University in Germany. These EEG signals are noninvasive and captured using a 12-bit analog-to-digital converter from a 128-channel amplifier device. Each dataset comprises 100 individual EEG signals, each containing 4097 sample points. The duration of each signal is 23.6 seconds, with a sampling rate of 173.61 hertz. In this investigation, the five distinct collections of EEG signals within the dataset are denoted as sets A, B, C, D, and E.

    Sets ‘A’ and ‘B’ consist of recordings from healthy subjects with open and closed eyes, respectively. The remaining three sets feature the waveforms of epileptic subjects. Sets ‘C’ and ‘D’ encompass interictal signals recorded during periods without seizures. Specifically, set ‘C’ includes EEG signals obtained from outside the epileptogenic zone, while set ‘D’ is composed of EEG signals recorded within the epileptogenic zone. Set ‘E’ captures authentic seizure waveforms. For analytical purposes, all five sets are included in our investigation. Each set represents a distinct class, and within each class, there are 100 samples, each containing 4097 data points.

    1D CNN is a variation of 2D CNN, which is commonly used for image processing. 1D CNN is designed to work with one-dimensional data such as sound waves, time-series data from sensors, or even text when formatted as a sequence [26]. The convolution operation is the core concept of 1D CNN. The convolution layers can be followed by max-pooling layers and lastly the fully connected layers. Convolution layers use a one-dimensional convolution operation. In the convolution operation, an M-sized input vector d passes through an N-sized filter or kernel vector k [27]. This procedure involves the kernel sliding across the input vector, doing element-wise multiplication on the input it covers at the position, and finally adding the products of these operations. This operation is performed across the entire length of the input vector and captures all the important patterns from the data. This convolution produces another one-dimensional vector, r. The length of output vector r is (M-N+1) if zero-padding is not used. The input vector and filter lengths determine this output length. The mathematical equation of the convolution operation is:

    $$r(j) = f\left(\sum_{i=0}^{N-1} k(i)\, d(j-i) + b\right), \qquad j = 0, 1, \ldots, M-1,$$

    where b is the bias and f is a nonlinear activation function. The bias helps the neural network better fit the data by allowing the activation function to be shifted, and the nonlinearity helps the network capture more complex features. Max-pooling layers reduce the spatial dimensions of the feature maps, producing a down-sampled version that retains the most significant features [28]; this lowers the computational complexity and training time of the network. The max-pooling operation can be written as:

    $$p = \max\big(w(n \times 1, s),\, v\big),$$

    where the max operation generates the output vector p from the input vector v using the kernel window function w with size n×1 and stride s. After the convolution and max-pooling layers, the fully connected layer takes the flattened feature maps and connects them to a set of neurons. These neurons interpret the features extracted by the convolution layers and make predictions based on them, which are refined throughout the training process. Neurons in this layer connect to all activations from the previous layer, and ReLU is used to introduce nonlinearity. The last fully connected layer usually has as many neurons as there are output classes [29], and it uses an activation function such as softmax for multi-class classification or sigmoid for binary classification. After the loss is calculated at the output, back-propagation adjusts the network's weights to improve its predictions [30]. During training, the loss function measures the difference between the predicted probabilities and the actual labels. The cross-entropy loss function is:

    $$E = -\sum_{k=1}^{Q} y_k \log(\hat{y}_k),$$

    where E is the cross-entropy loss for the given inputs and model predictions, Q is the total number of possible classes, $y_k$ is the true label for the k-th class, and $\hat{y}_k$ is the predicted probability that the input belongs to the k-th class. Figure 4 represents a basic architecture of a 1D CNN. Here the input EEG signal passes through the 1D CNN layer, followed by one max-pooling layer and a fully connected layer.
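    As a concrete illustration of the convolution and max-pooling operations defined above, the short NumPy sketch below slides an N-sized kernel over an M-sized input (valid mode, so the output has M−N+1 samples), adds a bias, applies ReLU as the nonlinearity f, and then down-samples with a pool window and stride of 2. The array values are arbitrary toy data; note that, like most deep learning libraries, the code computes cross-correlation (d(j+i)), the mirrored form of the textbook convolution d(j−i).

```python
import numpy as np

def conv1d_valid(d, k, b=0.0):
    """1D convolution with ReLU as the nonlinearity f; output length is M - N + 1 (no zero-padding)."""
    M, N = len(d), len(k)
    r = np.empty(M - N + 1)
    for j in range(M - N + 1):
        r[j] = np.dot(k, d[j:j + N]) + b   # element-wise multiply, sum, add bias
    return np.maximum(r, 0.0)              # ReLU

def max_pool1d(v, n=2, s=2):
    """Max-pooling over windows of size n with stride s."""
    return np.array([v[i:i + n].max() for i in range(0, len(v) - n + 1, s)])

d = np.array([0.1, 0.5, -0.2, 0.8, 0.3, -0.4, 0.9, 0.2])  # toy EEG segment, M = 8
k = np.array([0.25, 0.5, 0.25])                            # kernel, N = 3
r = conv1d_valid(d, k, b=0.05)                             # feature map of length 6
p = max_pool1d(r)                                          # down-sampled to length 3
print(r, p)
```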

    Figure 4.  Model architecture of 1D-CNN model.

    LSTM networks are a type of recurrent neural network (RNN) that can learn long-term dependencies. LSTMs have a separate memory cell that can store information over extended time intervals [31]. The memory cell stores and discards information based on the input, and three gates control the flow of information. The forget gate is responsible for which information will be discarded, the input gate adds information to the memory cell [32], and the output gate determines the next hidden state, which contains information based on the updated cell state. Given the input vector at time t as $z_t$, the previous hidden state as $\eta_{t-1}$, the previous cell state as $\delta_{t-1}$, and the candidate cell state as $\gamma_t$, the LSTM gate updates are as follows:

    $$\begin{aligned}
    \alpha_t &= \sigma(V_\alpha z_t + V_{\eta\alpha}\,\eta_{t-1} + d_\alpha) && \text{(forget gate output)} \\
    \beta_t &= \sigma(V_\beta z_t + V_{\eta\beta}\,\eta_{t-1} + d_\beta) && \text{(input gate output)} \\
    \gamma_t &= \tanh(V_\gamma z_t + V_{\eta\gamma}\,\eta_{t-1} + d_\gamma) && \text{(cell input activation)} \\
    \delta_t &= \alpha_t \odot \delta_{t-1} + \beta_t \odot \gamma_t && \text{(cell state update)} \\
    \theta_t &= \sigma(V_\theta z_t + V_{\eta\theta}\,\eta_{t-1} + d_\theta) && \text{(output gate output)} \\
    \eta_t &= \theta_t \odot \tanh(\delta_t) && \text{(hidden state output)}
    \end{aligned}$$

    where $\alpha_t$, $\beta_t$, and $\theta_t$ are the outputs of the forget, input, and output gates, respectively; $\gamma_t$ represents the candidate cell state; $\delta_t$ is the new cell state; $\eta_t$ is the new hidden state; the $V$ terms are the weight matrices; the $d$ terms are the biases; $\sigma$ and $\tanh$ are the sigmoid and hyperbolic tangent activation functions, respectively; and $\odot$ represents element-wise multiplication. The LSTM can capture long-term dependencies in sequential data because of its complex combination of gates and activations, which control information flow.
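    To make the gate equations concrete, the following NumPy sketch performs a single LSTM time step using the notation above ($z_t$ input, $\eta_{t-1}$ previous hidden state, $\delta_{t-1}$ previous cell state). The weight matrices and toy dimensions are randomly initialized purely for illustration; in a trained network these parameters are learned.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(z_t, eta_prev, delta_prev, W, U, d):
    """One LSTM time step following the gate equations in the text.
    W, U, d hold input weights, recurrent weights, and biases for the
    forget (a), input (b), candidate cell (g), and output (o) gates."""
    alpha = sigmoid(W["a"] @ z_t + U["a"] @ eta_prev + d["a"])   # forget gate
    beta  = sigmoid(W["b"] @ z_t + U["b"] @ eta_prev + d["b"])   # input gate
    gamma = np.tanh(W["g"] @ z_t + U["g"] @ eta_prev + d["g"])   # candidate cell state
    delta = alpha * delta_prev + beta * gamma                     # cell state update
    theta = sigmoid(W["o"] @ z_t + U["o"] @ eta_prev + d["o"])   # output gate
    eta   = theta * np.tanh(delta)                                # new hidden state
    return eta, delta

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8                                    # toy dimensions
W = {k: rng.normal(size=(n_hid, n_in)) for k in "abgo"}
U = {k: rng.normal(size=(n_hid, n_hid)) for k in "abgo"}
d = {k: np.zeros(n_hid) for k in "abgo"}
eta, delta = lstm_step(rng.normal(size=n_in), np.zeros(n_hid), np.zeros(n_hid), W, U, d)
```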

    Our proposed model is a combination of 1D CNN and LSTM. It can handle sequence data that needs patterns to be extracted along with the sequence. The primary function of the 1D CNN is to identify and extract local patterns or features in the sequence data by applying filters across the EEG brain signal data. We used 1D CNNs followed by max-pooling in our proposed model to minimize the spatial dimensions, making it computationally effective for the deeper layers of the network. The purpose of LSTM is to capture temporal dependencies. LSTMs are capable of remembering information over long sequences [33].

    The 1D CNN layers first extract the patterns within the data, reducing dimensionality and highlighting important features. After the data has been processed, the LSTM layers analyze it within the context of its sequence, taking into account both the extracted features and the temporal dependencies.

    To evaluate the performance of our proposed model, we employed a set of evaluation metrics like accuracy, precision, recall, and F1-score.

    Accuracy is a fundamental metric. It measures the proportion of total predictions that were correct. It is useful where classes are well balanced. Accuracy (A) can be defined as,

    $$A = \frac{TP + TN}{TP + TN + FP + FN},$$

    where TP denotes correctly predicted positive observations, TN correctly predicted negative observations, FP negative observations incorrectly predicted as positive, and FN positive observations incorrectly predicted as negative.

    Precision measures the correctness achieved in the positive class. It is defined as the ratio of correctly predicted positive observations to the total predicted positive observations. In mathematical terms, precision (P) is:

    $$P = \frac{TP}{TP + FP},$$

    where TP represents the count of instances correctly identified as positive and FP represents the count of instances wrongly identified as positive.

    Recall, also referred to as sensitivity, quantifies the percentage of actual positives that the model correctly identifies. The formula for Recall (R) is:

    $$R = \frac{TP}{TP + FN},$$

    where TP represents the count of instances correctly identified as positive and FN represents the count of positive instances that the model incorrectly classified as negative.

    The F1-score is the harmonic mean of precision and recall. It provides a more robust measure than examining either precision or recall alone and is particularly beneficial when dealing with imbalanced datasets. The formula for the F1-score (F1) is given by:

    $$F1 = 2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}.$$
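    All four metrics follow directly from the confusion-matrix counts, as the short sketch below illustrates (scikit-learn's classification_report yields the same values; the label arrays here are placeholders used only for illustration).

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 computed from TP/TN/FP/FN counts."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    accuracy  = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    f1        = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# placeholder labels and predictions, for illustration only
print(binary_metrics([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 0, 1]))
```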

    Within our dataset, we encounter a classification challenge involving five distinct classes. Notably, one of these classes contains recordings of seizure activity, while the remaining four classes consist of recordings without seizure activity. To vividly illustrate the composition of these five classes, we have visualized the data in Figure 5, providing a comprehensive and visually accessible representation of the diverse categories present in our dataset. This visualization serves as a crucial first step in understanding the distribution and interplay of seizure and non-seizure instances within the dataset, setting the stage for further analyses and model development.

    Figure 5.  The visualization of brain signal based on different classes.

    In our dataset, we undertook a binary classification approach, categorizing instances into two distinct classes: one representing seizure patients and the other encompassing non-seizure participants. To achieve this, we designated the seizure patient data as one class and amalgamated the data from other participants into the non-seizure class. The resulting distribution of this binary classification is visually presented in Figure 6.

    Figure 6.  The visualization of brain signal of (a) epileptic patients (b) non-epileptic patients (c) epileptic and non-epileptic patients.
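    A minimal preprocessing sketch for this binary setup is given below. It assumes the recordings are available as a CSV of fixed-length segments with a single label column, as in the widely distributed 178-sample-per-segment version of the Bonn data, maps the seizure class to 1 and all other classes to 0, and applies the 75%/25% split from Algorithm 1. The file and column names are placeholders, and the resulting X_train/X_test arrays are the ones referenced in the other sketches.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Placeholder file and column names; adjust to the actual dataset layout.
df = pd.read_csv("eeg_segments.csv")
X = df.drop(columns=["y"]).to_numpy(dtype="float32")
y = (df["y"] == 1).astype("int32").to_numpy()   # assumed: class 1 = seizure, others = non-seizure

# 75% training / 25% testing split, stratified to preserve the class balance.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42)

# Reshape to (samples, time steps, channels) for the 1D CNN-LSTM input.
X_train = X_train.reshape(-1, X_train.shape[1], 1)
X_test = X_test.reshape(-1, X_test.shape[1], 1)
```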

    We applied a 1D-CNN model to classify our dataset, and the training accuracy curve reveals interesting patterns. Initially, the accuracy increases noticeably, peaking around epoch 40. However, after approximately 65 epochs the training accuracy plateaus, suggesting a potential saturation of learning. The test accuracy curve, in contrast, exhibits a consistent flatline after the initial rise. This divergence between training and test accuracy indicates a potential overfitting scenario, in which the model performs exceptionally well on the training data but struggles to generalize effectively to unseen data. The overall training loss for this model is impressively low at 0.0217, indicating that the model successfully minimized the error during training. However, the discrepancy between the training and test accuracy curves prompts further investigation into the model's generalization capabilities. Despite the observed plateau in test accuracy, the model demonstrates a commendable overall accuracy of 94.29%. Figure 7 visually represents the model's performance, showcasing its accuracy and loss trends over the epochs.

    Figure 7.  1D CNN model performance per epochs in seizure detection (a) accuracy (b) model loss.

    In an effort to enhance the performance of our 1D-CNN model, we incorporated an additional dropout layer. However, intriguingly, the observed outcome did not reflect a significant improvement in model performance. The accuracy remained consistent at 94.29%, mirroring the previous model's behavior, while the model loss exhibited an increase. The addition of a dropout layer typically serves as a regularization technique to mitigate overfitting by introducing randomness during training. Despite this intended purpose, the lack of substantial improvement in accuracy suggests that the model might not be exhibiting signs of overfitting that can be effectively addressed by dropout regularization. Figure 8 visually encapsulates the performance dynamics of the extended 1D-CNN model, showcasing the accuracy and loss trends over the epochs. The divergence between accuracy and loss trends prompts a nuanced evaluation of the dropout layer's impact on the model's learning dynamics.

    Figure 8.  1D CNN model performance per epochs in seizure detection (a) accuracy (b) model loss.

    We have implemented the 2D-CNN model to assess its performance in comparison to the 1D-CNN model. However, the 2D-CNN model did not exhibit the same level of effectiveness, yielding an accuracy of 98.12%. The performance of this model is visually represented in Figure 9. Further details on the comparative analysis and insights from this evaluation are elaborated in the subsequent sections.

    Figure 9.  Model performance with comparison of accuracy and validation of 2D CNN model.

    The evaluation process involved the integration of two distinct convolution layers to gauge their impact on model performance. Convolution units of 32, 64, 128, 256, and 512 were employed randomly in both layers. This approach was adopted to conduct experimental analyses, aiming to understand how varying CNN configurations could enhance performance on the dataset. Remarkably, the training accuracy of this model demonstrated commendable results on the dataset, coupled with minimal loss. The training accuracy of this model is shown in Figure 10 and the training loss in Figure 11. However, the testing performance resembled that of the 1D-CNN model. Notably, the best-performing configuration used 32 units in the first convolution layer and 128 units in the second. The accuracy of this model, considering the different convolution values, was documented at 98.79%. A visual representation of the model's performance for the different convolution values is depicted in Figure 12, and the corresponding test loss is shown in Figure 13. The subsequent sections provide detailed insights into the experimental analyses and their implications.

    Figure 10.  Model performance with comparison of training accuracy of 2D CNN model on different parameters.
    Figure 11.  Model performance with comparis on of training loss of 2D CNN model on different parameters.
    Figure 12.  Model performance with comparison of testing accuracy of 2D CNN model on different parameters.
    Figure 13.  Model performance with comparison of testing loss of 2D CNN model on different parameters.

    Our developed model, named NeuroWave-Net, is a fusion of 1D CNN and LSTM architectures, and its application to our dataset has yielded remarkable results. The model has demonstrated exceptional performance, boasting an impressive overall accuracy of 99.48%. This places NeuroWave-Net at the forefront in terms of effectiveness when compared to the other models analyzed. Upon closer examination of its performance curve, a noteworthy observation is that the testing accuracy improves discernibly after the 20th epoch, indicating a period of refinement in which the model fine-tunes its parameters to better align with the intricacies of the dataset. An intriguing aspect is the stability of the accuracy after the 80th epoch, suggesting robust generalization and consistent predictive capability. Figure 14 illustrates the accuracy trend, showcasing the model's evolution over epochs; the upward trajectory followed by a stable plateau emphasizes its ability to learn and adapt over time. Figure 14 also captures the loss curve, which not only substantiates the model's accuracy but also underscores its efficiency: the consistently low loss values signify the model's adeptness at minimizing errors during training.

    Figure 14.  1D CNN-LSTM model performance per epochs in seizure detection (a) accuracy (b) model loss.

    The proposed model has demonstrated its highest accuracy at 99.48% when employing a learning rate of 0.0001, with identical precision, recall, and F1 values. Subsequently, an ablation study was conducted, reinforcing the optimal performance of the model with these parameters. Our exploration extended to testing the model with various optimizers, including Adam, RMSprop (Root Mean Square Propagation), Adagrad, SGD (stochastic gradient descent), and Adadelta. Notably, the Adam optimizer yielded the most robust performance. Further experimentation involved assessing the model at different epochs, specifically 80 and 100 epochs. Notably, the model achieved its peak accuracy at 100 epochs, highlighting the significance of an extended training duration. This observation underscores that our model requires a sufficient number of epochs to converge effectively, with lower epoch counts resulting in suboptimal accuracy. Table 2 provides a comprehensive overview of the model's overall performance across these varied configurations, consolidating the results of our meticulous testing and optimization efforts.
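    The grid explored in Table 2 can be reproduced with a simple loop over optimizers, learning rates, and epoch counts, along the lines of the sketch below. It assumes the build_neurowave_net helper and the X_train/X_test arrays from the earlier sketches, and it records only validation accuracy for brevity.

```python
from tensorflow.keras.optimizers import Adadelta, Adagrad, Adam, RMSprop, SGD

optimizers = {"Adam": Adam, "RMSprop": RMSprop, "Adagrad": Adagrad,
              "SGD": SGD, "Adadelta": Adadelta}
results = []
for name, opt_cls in optimizers.items():
    for lr in (1e-5, 1e-4, 1e-3):
        for epochs in (80, 100):
            model = build_neurowave_net()                    # helper from the earlier sketch
            model.compile(loss="binary_crossentropy",        # recompile with the setting under test
                          optimizer=opt_cls(learning_rate=lr),
                          metrics=["accuracy"])
            history = model.fit(X_train, y_train, epochs=epochs,
                                validation_data=(X_test, y_test), verbose=0)
            results.append((name, lr, epochs, history.history["val_accuracy"][-1]))
```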

    Table 2.  Ablation study of proposed 1D LSTM-CNN model.
    Optimizer | Learning rate | Epochs | Accuracy | Precision | Recall | F1 | Loss | Time (s)
    Adam | 0.00001 | 100 | 96.07% | 99.86% | 92.26% | 95.91% | 0.1336 | 685.75
    Adam | 0.0001 | 100 | 99.48% | 99.48% | 99.48% | 99.48% | 0.0238 | 666.25
    Adam | 0.001 | 100 | 99.26% | 98.75% | 99.75% | 99.20% | 0.0281 | 686.07
    Adam | 0.00001 | 80 | 96.98% | 99.27% | 94.65% | 96.91% | 0.0946 | 543.87
    Adam | 0.0001 | 80 | 99.11% | 99.04% | 99.17% | 99.11% | 0.0353 | 566.51
    Adam | 0.001 | 80 | 99.37% | 99.01% | 99.01% | 99.37% | 0.0293 | 523.45
    RMSprop | 0.00001 | 100 | 97.15% | 97.15% | 95.22% | 97.10% | 0.0799 | 690.47
    RMSprop | 0.0001 | 100 | 99.24% | 99.17% | 99.30% | 99.24% | 0.0327 | 685.52
    RMSprop | 0.001 | 100 | 99.04% | 98.58% | 99.52% | 99.05% | 0.0576 | 688.97
    RMSprop | 0.00001 | 80 | 93.78% | 99.51% | 88.00% | 93.40% | 0.1913 | 566.29
    RMSprop | 0.0001 | 80 | 99.26% | 99.56% | 98.96% | 99.26% | 0.0363 | 565.40
    RMSprop | 0.001 | 80 | 99.11% | 98.66% | 99.57% | 99.11% | 0.0394 | 506.02
    Adagrad | 0.00001 | 100 | 88.28% | 86.10% | 91.30% | 88.63% | 0.6591 | 565.26
    Adagrad | 0.0001 | 100 | 93.22% | 98.54% | 87.74% | 92.82% | 0.2212 | 646.98
    Adagrad | 0.001 | 100 | 97.43% | 99.37% | 95.48% | 97.38% | 0.0736 | 696.80
    Adagrad | 0.00001 | 80 | 85.24% | 83.62% | 87.65% | 85.59% | 0.6633 | 565.28
    Adagrad | 0.0001 | 80 | 93.02% | 97.23% | 88.57% | 92.70% | 0.2323 | 565.30
    Adagrad | 0.001 | 80 | 94.85% | 99.66% | 90.00% | 94.59% | 0.1502 | 565.33
    SGD | 0.00001 | 100 | 68.83% | 62.48% | 94.26% | 75.15% | 0.6707 | 745.99
    SGD | 0.0001 | 100 | 92.41% | 96.04% | 88.48% | 92.10% | 0.4171 | 641.75
    SGD | 0.001 | 100 | 94.35% | 99.04% | 89.57% | 94.06% | 0.1510 | 745.80
    SGD | 0.00001 | 80 | 87.24% | 91.80% | 81.78% | 86.50% | 0.6691 | 513.58
    SGD | 0.0001 | 80 | 92.57% | 93.55% | 91.43% | 92.48% | 0.4202 | 565.24
    SGD | 0.001 | 80 | 96.70% | 98.12% | 95.22% | 96.65% | 0.0892 | 565.70
    Adadelta | 0.00001 | 100 | 57.72% | 58.29% | 96.30% | 72.62% | 0.6737 | 746.19
    Adadelta | 0.0001 | 100 | 90.26% | 94.61% | 85.39% | 89.76% | 0.5800 | 806.64
    Adadelta | 0.001 | 100 | 92.50% | 99.05% | 85.83% | 91.96% | 0.1993 | 702.47
    Adadelta | 0.00001 | 80 | 58.28% | 54.62% | 97.96% | 70.13% | 0.6756 | 626.37
    Adadelta | 0.0001 | 80 | 91.30% | 94.39% | 87.83% | 90.99% | 0.5647 | 565.62
    Adadelta | 0.001 | 80 | 92.28% | 97.88% | 86.43% | 91.80% | 0.2035 | 626.72


    In our study, we employed various classifiers to comprehensively evaluate model performance. Notably, the GRU (gated recurrent unit) model yielded an accuracy of 94.09%, and the CNN model demonstrated strong performance with an accuracy of 99.33%. The proposed model showcased remarkable efficacy when compared to the other models considered in the study. Following our exploration of neural network architectures, we also evaluated traditional machine learning models, whose performance was not as robust: the logistic regression and linear SVC models, for instance, achieved accuracies of only 64.85% and 64.30%, respectively. In stark contrast, our proposed model continued to exhibit superior accuracy, reaching an impressive 99.48%. These results underscore the efficacy of our proposed model, highlighting its substantial advantages over both neural network and traditional machine learning counterparts in the context of the studied task. Table 3 compares the performance of the different classifiers.
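    For the traditional baselines in Table 3, the flattened EEG segments can be passed to standard scikit-learn classifiers. The sketch below illustrates such a comparison loop; hyperparameters are left at library defaults, so it will not necessarily reproduce the exact figures reported in the table.

```python
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

baselines = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Linear SVC": LinearSVC(),
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(),
    "Gradient Boosting": GradientBoostingClassifier(),
    "MLP": MLPClassifier(max_iter=500),
}

# Flatten the (samples, time steps, channels) arrays back to 2D for these classifiers.
X_train_flat = X_train.reshape(len(X_train), -1)
X_test_flat = X_test.reshape(len(X_test), -1)
for name, clf in baselines.items():
    clf.fit(X_train_flat, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test_flat))
    print(f"{name}: {acc:.4f}")
```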

    Table 3.  Comparing the proposed model with other machine learning models.
    Model | Accuracy | Precision | Recall | F1 | Loss | Time (s)
    GRU | 94.09% | 94.36% | 94.36% | 94.07% | 0.2138 | 59.13
    CNN | 99.33% | 98.88% | 99.78% | 99.33% | 0.0469 | 624.32
    DNN | 93.11% | 94.20% | 91.87% | 93.02% | 0.2116 | 143.32
    Logistic Regression | 64.85% | 65.85% | 65.85% | 64.29% | 0.6726 | 2.07
    Linear SVC | 64.30% | 65.39% | 64.30% | 63.66% | 0.9528 | 10.17
    Decision Tree | 92.35% | 92.35% | 92.35% | 92.35% | 2.7581 | 3.33
    Random Forest | 97.54% | 97.57% | 97.54% | 97.54% | 0.1086 | 17.20
    Gradient Boosting | 95.33% | 95.34% | 95.33% | 95.33% | 0.1365 | 55.09
    MLP | 98.43% | 98.44% | 98.43% | 98.43% | 0.0898 | 16.8
    NeuroWave-Net (Proposed) | 99.48% | 99.48% | 99.48% | 99.48% | 0.0238 | 666.25


    Numerous studies have delved into the intricate domain of seizure prediction, employing a spectrum of machine learning and deep learning techniques. Notably, the contemporary preference for EEG datasets underscores a collective shift toward leveraging brain signal data over other forms of neuroimaging. A case in point is the study conducted by Zhao et al. [11], which, intriguingly, utilized a dataset akin to ours. While their application of a CNN model resulted in a commendable accuracy of 96.97%, there remains substantial room for elevating model performance. The work by Nishad et al. [12] in 2020 showcased a distinct approach, achieving an impressive accuracy of 99% through the utilization of an RF model. Equally groundbreaking is the study by Hemachandira et al. [14], emphasizing a paradigm shift toward feature optimization. Their hybrid model, integrating SVM, yielded an accuracy of 98%, signifying a revolutionary stride in the landscape of brain signal data analysis. This underscores a collective industry focus on feature grouping and optimization strategies, demonstrating their efficacy in achieving superior outcomes. This study [34] presents MP-SeizNet (A multi-path CNN Bi-LSTM Network), a novel deep learning network for seizure-type classification, utilizing both CNN and Bi-LSTM with attention. Assessed on the Temple University Hospital EEG Seizure Corpus, the model attains notable F1-scores of 87.6% for patient data and 98.1% for seizure data.

    Building upon these advancements, our study introduces a novel 1D CNN-LSTM hybrid model. The strategic amalgamation of CNN for robust feature extraction and LSTM for precise classification culminated in a noteworthy accuracy of 99.48%. This achievement not only surpasses the benchmarks set by prior works but also underscores the effectiveness of our proposed approach in the context of seizure prediction.

    The implications of our model extend into practical healthcare integration. With seamless adaptability into healthcare systems, our model emerges as a potent diagnostic tool for seizure diseases, contributing significantly to the realm of e-healthcare. Its global applicability positions it as a valuable asset for the classification of seizure diseases from human brain signals, symbolizing a substantial advancement in the landscape of neurohealth diagnostics. Our work marks a significant advancement in seizure classification models, showcasing the potential of deep learning techniques. The envisioned expansion of our dataset and the integration of federated learning in future iterations represents crucial steps toward realizing a transformative impact in the field of real-time seizure diagnosis. The comparison of our model's performance to that of previously published research is presented in Table 4.

    Table 4.  Comparison of our model performance with existing studies.
    Study | Year | Classification approach | Accuracy (%)
    [11] | 2020 | CNN | 96.97
    [12] | 2020 | RF | 99.00
    [35] | 2021 | KST-Adaboost | 98.50
    [36] | 2022 | FNR | 96.67
    [14] | 2022 | PSO-SVM | 98.00
    [34] | 2023 | MP-SeizNet | 87.60
    Proposed model | - | NeuroWave-Net | 99.48


    Our research endeavors culminate in a groundbreaking outcome with the introduction of a 1D CNN-LSTM model, poised to revolutionize the landscape of seizure detection. In response to the common health concern of seizures, our innovative model achieves an exceptional accuracy rate of 99.48%. This robust performance signifies a significant leap forward compared to conventional machine learning models, emphasizing the potential of advanced neural network architectures in the realm of neurology. By delving into the intricacies of brain signal data, our NeuroWave-Net model not only enhances accuracy but also showcases the adaptability of cutting-edge technologies in addressing critical healthcare challenges. This outcome is a testament to the potency of bridging artificial intelligence (AI) and medical research to create tools that can significantly impact patient care. A limitation of the present work is its reliance on a single publicly available dataset; to address this limitation and fortify the generalizability of our model, future endeavors will focus on data expansion through collaborations with various hospitals. This strategic approach not only ensures a more diverse dataset but also enhances the robustness of our model across different patient populations.

    In the future, the integration of our model into the medical workflow will stand out as a crucial milestone. We envision seamlessly incorporating our 1D CNN-LSTM model into clinical practices, ensuring its accessibility and usability by healthcare professionals. This integration will involve close collaboration with medical institutions, the development of user-friendly interfaces, and adherence to regulatory standards. Moreover, our ongoing exploration of federated learning holds promise for creating a brain computer interface app capable of rapid seizure detection. By sharing knowledge across multiple healthcare facilities, we aspire to develop a tool that transcends geographical boundaries and becomes an invaluable asset in clinical settings. In essence, our research not only addresses current challenges but also sets the stage for a future where advanced AI models seamlessly integrate into medical workflows, enhancing patient outcomes and advancing the field of neurology.

    This research was funded by Taif University, Saudi Arabia, project number (TU-DSPP-2024-04).

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.


    Acknowledgments



    The authors extend their appreciation to Taif University, Saudi Arabia, for supporting this work through project number (TU-DSPP-2024-04). We also extend our sincere gratitude to the ICT Division, Ministry of Post, Telecommunication, and Information Technology, Government of Bangladesh for their support, enabling the research conducted under the ICT fellowship program.

    Conflict of interest



    The authors have no conflict of interest.

    [1] Fisher RS, van Emde Boas W, Blume W, et al. (2005) Response: Definitions proposed by the international league against epilepsy (ILAE) and the international bureau for epilepsy (IBE). Epilepsia 46: 1701-1702. https://doi.org/10.1111/j.1528-1167.2005.00273_4.x
    [2] Juan E, Górska U, Kozma C, et al. (2023) Distinct signatures of loss of consciousness in focal impaired awareness versus tonic-clonic seizures. Brain 146: 109-123. https://doi.org/10.1093/brain/awac291
    [3] Schwartz PJ, Ackerman MJ, Antzelevitch C, et al. (2020) Inherited cardiac arrhythmias. Nat Rev Dis Primers 6: 58. https://doi.org/10.1038/s41572-020-0188-7
    [4] Lemoine É, Toffa D, Pelletier-Mc DG, et al. (2023) Machine-learning for the prediction of one-year seizure recurrence based on routine electroencephalography. Sci Rep 13: 12650. https://doi.org/10.1038/s41598-023-39799-8
    [5] McKee JL, Kaufman MC, Gonzalez AK, et al. (2023) Leveraging electronic medical record-embedded standardised electroencephalogram reporting to develop neonatal seizure prediction models: A retrospective cohort study. Lancet Digit Health 5: e217-e226. https://doi.org/10.1016/S2589-7500(23)00004-3
    [6] Pinto MF, Batista J, Leal A, et al. (2023) The goal of explaining black boxes in eeg seizure prediction is not to explain models' decisions. Epilepsia Open 8: 285-297. https://doi.org/10.1002/epi4.12748
    [7] Khare SK, Khan AM, Bajaj V, et al. (2023) Introduction to smart healthcare and the role of cognitive sensors. Cognitive Sensors. UK: IOP Publishing Bristol 1-21.
    [8] Hernandez-Pavon JC, Veniero D, Bergmann TO, et al. (2023) TMS combined with EEG: Recommendations and open issues for data collection and analysis. Brain Stimul 6: 567-593. https://doi.org/10.1016/j.brs.2023.02.009
    [9] Chiarion G, Sparacino L, Antonacci Y, et al. (2023) Connectivity analysis in EEG data: A tutorial review of the state of the art and emerging trends. Bioeng 10: 372. https://doi.org/10.3390/bioengineering10030372
    [10] López-Arango G, Deguire F, Agbogba K, et al. (2023) Impact of macrocephaly, as an isolated trait, on EEG signal as measured by spectral power and multiscale entropy during the first year of life. Dev Neurosci 45: 210-222. https://doi.org/10.1159/000529722
    [11] Zhao W, Zhao WB, Wang WF, et al. (2020) A novel deep neural network for robust detection of seizures using eeg signals. Comput Math Method M 2020: 9689821. https://doi.org/10.1155/2020/9689821
    [12] Nishad A, Pachori RB (2020) Classification of epileptic electroencephalogram signals using tunable-Q wavelet transform based filter-bank. J Amb Intel Hum Comp 15: 877-891. https://doi.org/10.1007/s12652-020-01722-8
    [13] Nithya K, Sharma S, Sharma RR (2023) Eigenvalues of hankel matrix based epilepsy detection using EEG signals. In 2023 2nd International Conference on Paradigm Shifts in Communications Embedded Systems, Machine Learning and Signal Processing (PCEMS). New York: IEEE 1-6.
    [14] Hemachandira VS, Viswanathan R (2022) A framework on performance analysis of mathematical model-based classifiers in detection of epileptic seizure from EEG signals with efficient feature selection. J Healthc Eng 2022: 7654666. https://doi.org/10.1155/2022/7654666
    [15] Cao JW, Hu DH, Wang YM, et al. (2021) Epileptic classification with deep-transfer-learning-based feature fusion algorithm. IEEE T Cogn Dev Syst 14: 684-695. https://doi.org/10.1109/TCDS.2021.3064228
    [16] Wang YP, Dai Y, Liu ZM, et al. (2021) Computer-aided intracranial EEG signal identification method based on a multi-branch deep learning fusion model and clinical validation. Brain Sci 11: 615. https://doi.org/10.3390/brainsci11050615
    [17] Liu Y, Huang YX, Zhang XX, et al. (2020) Deep C-LSTM neural network for epileptic seizure and tumor detection using high-dimension EEG signals. IEEE Access 8: 37495-37504. https://doi.org/10.1109/ACCESS.2020.2976156
    [18] Beeraka SM, Kumar A, Sameer M, et al. (2022) Accuracy enhancement of epileptic seizure detection: A deep learning approach with hardware realization of STFT. Circ Syst Signal Pr 41: 461-484. https://doi.org/10.1007/s00034-021-01789-4
    [19] Ryu S, Joe I (2021) A hybrid DenseNET-LSTM model for epileptic seizure prediction. Appl Sci 11: 7661. https://doi.org/10.3390/app11167661
    [20] Srivastava A, Singh A, Tiwari AK (2022) An efficient hybrid approach for the prediction of epilepsy using CNN with LSTM. Int J Artif Intell Soft Comput 7: 179-193. https://doi.org/10.1504/IJAISC.2022.126336
    [21] Wang XH, Gong GH, Li N (2019) Detection analysis of epileptic EEG using a novel random forest model combined with grid search optimization. Front Hum Neurosci 13: 52. https://doi.org/10.3389/fnhum.2019.00052
    [22] Sharma RR, Pachori RB (2018) Time–frequency representation using IEVDHM-HT with application to classification of epileptic EEG signals. Iet Sci Meas Technol 12: 72-82. https://doi.org/10.1049/iet-smt.2017.0058
    [23] Sharma RR, Varshney P, Pachori RB, et al. (2018) Automated system for epileptic EEG detection using iterative filtering. IEEE Sensors Letters 2: 1-4. https://doi.org/10.1109/LSENS.2018.2882622
    [24] San-Segundo R, Gil-Martin M, D'Haro-Enrquez LF, et al. (2019) Classification of epileptic eeg recordings using signal transforms and convolutional neural networks. Comput Biol Med 109: 148-158. https://doi.org/10.1016/j.compbiomed.2019.04.031
    [25] Andrzejak RG, Lehnertz K, Mormann F, et al. (2001) Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: Dependence on recording region and brain state. Phy Rev E 64: 061907. https://doi.org/10.1103/PhysRevE.64.061907
    [26] Ahmed AA, Ali W, Abdullah TA, et al. (2023) Classifying cardiac arrhythmia from ECG signal using 1D CNN deep learning model. Mathematics 11: 562. https://doi.org/10.3390/math11030562
    [27] Xiong QS, Kong QZ, Xiong HB, et al. (2024) Physics-informed deep 1D CNN compiled in extended state space fusion for seismic response modeling. Comput Struct 291: 107215. https://doi.org/10.1016/j.compstruc.2023.107215
    [28] Moussavou Boussougou MK, Park DJ (2023) Attention-based 1D CNN-BILSTM hybrid model enhanced with fasttext word embedding for korean voice phishing detection. Mathematics 11: 3217. https://doi.org/10.3390/math11143217
    [29] Phukan N, Manikandan MS, Pachori RB (2023) Afibri-net: A lightweight convolution neural network based atrial fibrillation detector. IEEE T Circuits-1 70: 4962-4974. https://doi.org/10.1109/TCSI.2023.3303936
    [30] Hassan W, Joolee JB, Jeon S (2023) Establishing haptic texture attribute space and predicting haptic attributes from image features using 1D-CNN. Sci Rep 13: 11684. https://doi.org/10.1038/s41598-023-38929-6
    [31] Iyer A, Das SS, Teotia R (2023) CNN and LSTM based ensemble learning for human emotion recognition using EEG recordings. Multimed Tools Appl 82: 4883-4896. https://doi.org/10.1007/s11042-022-12310-7
    [32] Li DK (2023) Multivariate time series prediction based on quantum enhanced LSTM models. Second International Conference on Electronic Information Technology (EIT 2023). USA: SPIE 491-497. https://doi.org/10.1117/12.2685468
    [33] Mohammed AYA, Yaw CT, Koh SP, et al. (2023) Detection of corona faults in switchgear by using 1D-CNN, LSTM, and 1D-CNN-LSTM methods. Sensors 23: 3108. https://doi.org/10.3390/s23063108
    [34] Albaqami H, Hassan GM, Datta A (2023) MP-seiznet: A multi-path cnn BI-LSTM network for seizure-type classification using EEG. Biomed Signal Proces 84: 104780. https://doi.org/10.1016/j.bspc.2023.104780
    [35] Shoeibi A, Ghassemi N, Alizadehsani R, et al. (2021) A comprehensive comparison of handcrafted features and convolutional autoencoders for epileptic seizures detection in EEG signals. Expert Syst Appl 163: 113788. https://doi.org/10.1016/j.eswa.2020.113788
    [36] Qureshi MB, Afzaal M, Qureshi MS, et al. (2022) Fuzzy-based automatic epileptic seizure detection framework. Comput Mater Contin 7: 5601-5630. https://doi.org/10.32604/cmc.2022.020348
    © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)