A membrane protein's functions are significantly associated with its type, so it is crucial to identify the types of membrane proteins. Conventional computational methods for identifying membrane protein types tend to ignore two issues: the high-order correlation among membrane proteins and the scenarios of multi-modal representations of membrane proteins, which leads to information loss. To tackle these two issues, in this paper we propose a deep residual hypergraph neural network (DRHGNN), which enhances the hypergraph neural network (HGNN) with initial residual and identity mapping. We carried out extensive experiments on four benchmark datasets of membrane proteins and compared the DRHGNN with recently developed advanced methods. Experimental results showed the better performance of DRHGNN on the membrane protein classification task on all four datasets. Experiments also showed that, unlike HGNN, DRHGNN can handle the over-smoothing issue as the number of model layers increases. The code is available at https://github.com/yunfighting/Identification-of-Membrane-Protein-Types-via-deep-residual-hypergraph-neural-network.
Citation: Jiyun Shen, Yiyi Xia, Yiming Lu, Weizhong Lu, Meiling Qian, Hongjie Wu, Qiming Fu, Jing Chen. Identification of membrane protein types via deep residual hypergraph neural network[J]. Mathematical Biosciences and Engineering, 2023, 20(11): 20188-20212. doi: 10.3934/mbe.2023894
1. Introduction
The proteins contained in the biological membrane are called membrane proteins, which play a leading role in maintaining many life activities, including but not limited to cell proliferation and differentiation, energy transformation, signal transduction and material transportation. As we know, a membrane protein's functions are significantly associated with its type, so it is important to identify the types of membrane proteins [1]. Membrane proteins can be grouped into eight types [2]: single-span type 1, single-span type 2, single-span type 3, single-span type 4, multi-span, lipid-anchor, glycosylphosphatidylinositol (GPI)-anchor and peripheral.
There exist many computational methods that can be used for identifying the types of membrane proteins. Chou and Elrod [3] used the covariant discriminant algorithm (CDA) based on amino acid composition (AAC) to predict membrane protein types. To address the challenge posed by the large number of possible patterns in protein sequences, Chou [4] introduced the pseudo-amino acid composition (PAAC), which combines a set of discrete sequence-correlation factors with the 20 components of the traditional amino acid composition. Wang et al. [5] utilized the pseudo-amino acid composition to incorporate sequence-order effects, introduced spectral analysis to represent the statistical sample of a protein and applied the weighted support vector machine (SVM) algorithm. Liu et al. [6] introduced low-frequency Fourier spectrum analysis based on the concept of PAAC, which effectively incorporates sequence patterns into discrete components and enables existing prediction algorithms to be applied directly to protein samples. Chou and Shen [7] developed a two-layer predictor that first classifies a protein as membrane or non-membrane; if it is a membrane protein, a second-layer prediction engine determines its specific type from eight categories. The predictor stands out for incorporating evolutionary information through pseudo position-specific score matrix (Pse-PSSM) vectors and for its ensemble classifier consisting of multiple optimized evidence-theoretic K-nearest neighbor (OET-KNN) classifiers. Rezaei et al. [8] classified membrane proteins by applying wavelet analysis to their sequences and extracting informative features, which were normalized and used as input for a cascaded model designed to mitigate the bias caused by differences in membrane protein class sizes in the dataset. Wang et al. [9] utilized the dipeptide composition (DC) method to represent proteins as high-dimensional feature vectors, introduced the neighborhood preserving embedding (NPE) algorithm for linear dimensionality reduction to extract essential features from the high-dimensional DC space and then employed the reduced low-dimensional features with a K-nearest neighbor (K-NN) classifier to classify membrane protein types. Hayat and Khan [10] integrated composite protein sequence features (CPSR) with the PAAC to classify membrane proteins; they further proposed split amino acid composition (SAAC) with ensemble classification [11] and later fused the position-specific scoring matrix (PSSM) with SAAC [12]. Chen and Li [13] introduced a novel computational classifier for predicting membrane protein types from protein sequences, constructed from a collection of one-versus-one SVMs and incorporating various sequence attributes. Han et al. [14] integrated amino acid classifications and physicochemical properties into PAAC and used a two-stage multiclass SVM to classify membrane proteins. Wan et al. [15] retrieved the gene ontology (GO) information associated with a query membrane protein by searching a compact GO-term database with its homologous accession number and then employed a multi-label elastic net (EN) classifier to classify the membrane protein based on this information. Lu et al. [16] used a dynamic deep network architecture based on lifelong learning for membrane protein classification. Wang et al. [17] introduced a new support bio-sequence machine, which uses SVM for protein classification.
In summary, most of the above models use different computational methods to represent membrane proteins and then apply classification algorithms to identify membrane protein types. The feature input formats of these methods vary, as shown in Table 1.
Table 1.
Varied types of feature input formats of different methods.
However, in practice more than two proteins can be linked by non-covalent interactions [18,19], and the representation of proteins is multi-modal. Traditional computational methods for identifying membrane protein types tend to ignore these two issues, which leads to information loss because the high-order correlation among membrane proteins and the scenarios of multi-modal representations of membrane proteins are disregarded.
To tackle those problems, in this paper we use a deep residual hypergraph neural network (DRHGNN) [20] to further learn representations of membrane proteins and ultimately achieve accurate identification of membrane protein types.
First, each membrane protein is represented by extracted features. Five feature extraction methods are employed based on the PSSM of the membrane protein sequence [2]: average blocks (AvBlock), discrete cosine transform (DCT), discrete wavelet transform (DWT), histogram of oriented gradient (HOG) and PsePSSM. Five types of features are extracted accordingly. Second, each feature type generates a hypergraph G, represented by an incidence matrix H, that models complex high-order correlation. The five types of features and their corresponding incidence matrices H are then concatenated, respectively, which accommodates the multi-modal representations of membrane proteins. Lastly, the concatenated features and the fused incidence matrix are input into the DRHGNN to classify the various types of membrane proteins. To assess the performance of DRHGNN, we perform tests on four distinct membrane protein datasets, on which the model achieves better performance in the membrane protein classification task.
2. Materials and methods
To extract features of membrane proteins, we employ AvBlock, DCT, DWT, HOG and PsePSSM [2] based on the PSSM of the membrane protein sequence. Each type of PSSM-based feature is used to generate a hypergraph represented by an incidence matrix H; the five types of features and their corresponding H are then concatenated, respectively, and both are fed into a DRHGNN [20,21,22] to identify the types of membrane proteins. Figure 1 depicts the schematic diagram.
Figure 1.
The schematic diagram of our proposed method.
2.1. Datasets
We evaluate the performance of DRHGNN on the classification of membrane proteins using four datasets: Dataset 1, Dataset 2, Dataset 3 and Dataset 4.
Dataset 1 is directly sourced from Chou's work [7], where protein sequences are sourced from the Swiss-Prot [23] database. Chou and Shen [7] employed a percentage distribution method to randomly assign the protein sequences to the training and testing sets, ensuring a balanced number of sequences between the two. Dataset 1 consists of 7582 membrane proteins from eight types and uses the same training/testing split as [7], where 3249 membrane proteins are employed for training, with the remaining 4333 employed for testing.
Dataset 2 was created by removing redundant and highly similar sequences from Dataset 1. This resulted in a curated dataset with reduced homology, specifically ensuring that no pair of proteins shared a sequence identity greater than 40%. The training set of Dataset 2 was obtained by removing redundant sequences from Dataset 1's training set. Similarly, the testing set of Dataset 2 was prepared by eliminating redundant sequences and those with high sequence identity to the training set. Dataset 2 consists of 4594 membrane proteins from eight types and the same training/testing split as [13], where 2288 membrane proteins are employed for training, with the remaining 2306 membrane proteins employed for testing.
To update and expand the datasets, Chen and Li [13] created Dataset 3 through the following steps. Initially, membrane protein sequences were obtained from the Swiss-Prot [23] database using the "protein subcellular localization" annotation. Stringent exclusion criteria were applied to ensure dataset quality: 1) exclusion of fragmented proteins or those shorter than 50 amino acid residues; 2) removal of proteins with non-experimental qualifiers or multiple topologies in their annotations; 3) elimination of homologous sequences with a sequence identity greater than 40% using the cluster database at high identity with tolerance (CD-HIT) program [24]. Subsequently, the sequences were categorized into their respective membrane protein types based on topology annotations. To generate the training and testing sets, a random assignment was performed employing the above-mentioned percentage distribution method. Consequently, Dataset 3 was created, providing an updated and expanded dataset of membrane protein sequences characterized by enhanced quality and classification. Dataset 3 consists of 6677 membrane proteins from eight types and uses the same training/testing split as [13], where 3073 membrane proteins are employed for training and 3604 for testing.
Dataset 4 is directly sourced from Chou's work [3], where protein sequences are sourced from the Swiss-Prot [23] database. The training and testing sets were obtained after protein sequences were screened with three procedures. Dataset 4 consists of 4684 membrane proteins from five types and the same training/testing split as [3], where 2059 membrane proteins are used for training and 2625 membrane proteins are employed for testing. Table 2 outlines the details of the datasets.
Table 2.
The scale of training and testing samples in four different membrane proteins' datasets.
We use the same membrane protein features as [2], which are extracted with five methods based on the PSSM of membrane proteins.
2.2. PSSM
The PSSM is a widely used tool in bioinformatics for capturing the evolutionary information encoded within membrane protein sequences. It is generated through multiple sequence alignment and database searching, e.g., with the position-specific iterated BLAST (PSI-BLAST) program [25], to identify conserved residues and their positional probabilities.
The evolutionary information obtained from the PSSM is preserved within a matrix of size R × 20 (R rows and 20 columns), presented as follows:

$$\mathrm{PSSM} = \begin{bmatrix} p_{1,1} & p_{1,2} & \cdots & p_{1,20} \\ p_{2,1} & p_{2,2} & \cdots & p_{2,20} \\ \vdots & \vdots & \ddots & \vdots \\ p_{R,1} & p_{R,2} & \cdots & p_{R,20} \end{bmatrix}, \quad (2.1)$$

where the column indices 1–20 denote the 20 different amino acids and R denotes the length of the membrane protein sequence. The element $p_{i,j}$ is calculated as follows:
$$p_{i,j} = \sum_{k=1}^{20} \omega(i,k) \times D(k,j), \quad i = 1, \ldots, R,\ j = 1, \ldots, 20. \quad (2.2)$$
$\omega(i,k)$ represents the frequency of the k-th amino acid type at position i, and $D(k,j)$ denotes the value derived from Dayhoff's mutation matrix (substitution matrix) for the k-th and j-th amino acid types. The equation thus combines amino acid frequency information with substitution probabilities.
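As a concrete illustration, here is a minimal NumPy sketch of Eq (2.2). The frequency matrix `omega` and the Dayhoff matrix are random placeholders standing in for the values a PSI-BLAST alignment and the real substitution matrix would supply.

```python
import numpy as np

def pssm_from_frequencies(omega, dayhoff):
    """Eq (2.2): p[i, j] = sum_k omega[i, k] * D[k, j].

    omega   -- (R, 20) frequency of each amino acid type at each position
    dayhoff -- (20, 20) Dayhoff mutation (substitution) matrix
    Returns the (R, 20) PSSM.
    """
    assert omega.shape[1] == 20 and dayhoff.shape == (20, 20)
    return omega @ dayhoff  # the matrix product realizes the sum over k

# Toy usage with random placeholders in place of real alignment data.
rng = np.random.default_rng(0)
pssm = pssm_from_frequencies(rng.random((120, 20)), rng.random((20, 20)))
print(pssm.shape)  # (120, 20)
```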
2.3. AvBlock
AvBlock is a block-averaging technique for summarizing data sequences of varying length and is a widely adopted approach for constructing matrix descriptors of protein sequences [26]. Here, the PSSM matrix is partitioned into 20 blocks along the rows, and each block is averaged into a feature vector of dimensionality 20, so the descriptor has a fixed size regardless of sequence length.
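A minimal sketch of this block-averaging step is shown below; using `np.array_split` so that sequence lengths not divisible by 20 are still handled is our choice, as the text does not specify how remainders are treated.

```python
import numpy as np

def avblock_features(pssm, n_blocks=20):
    """Partition the R x 20 PSSM into n_blocks row-wise blocks and average
    each block, yielding an n_blocks * 20 (here 400) dimensional descriptor."""
    blocks = np.array_split(pssm, n_blocks, axis=0)
    return np.concatenate([block.mean(axis=0) for block in blocks])

pssm = np.random.default_rng(0).random((137, 20))  # toy PSSM, R = 137
print(avblock_features(pssm).shape)  # (400,)
```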
2.4. DCT
The DCT [27] is a mathematical transform widely used in signal and image processing. Here, we employ a two-dimensional DCT (2D-DCT) to compress the PSSM of proteins. For an $M \times N$ input $f$, the 2D-DCT is defined as

$$F(u,v) = \alpha(u)\alpha(v)\sum_{i=0}^{M-1}\sum_{j=0}^{N-1} f(i,j)\cos\left[\frac{(2i+1)u\pi}{2M}\right]\cos\left[\frac{(2j+1)v\pi}{2N}\right], \quad (2.3)$$

where $\alpha(u) = \sqrt{1/M}$ for $u = 0$ and $\sqrt{2/M}$ otherwise, and $\alpha(v)$ is defined analogously with $N$. The low-frequency coefficients, which concentrate most of the energy, serve as the compressed representation.
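The compression step can be sketched with SciPy as follows. The number of retained low-frequency coefficients (`keep`) is illustrative only; this section does not state how many the method keeps.

```python
import numpy as np
from scipy.fft import dctn

def dct_features(pssm, keep=20):
    """2D-DCT of the PSSM, keeping the top-left (low-frequency) keep x 20
    block, which concentrates most of the signal energy; this turns a
    variable-length PSSM into a fixed-size descriptor (assumes R >= keep)."""
    coeffs = dctn(pssm, type=2, norm="ortho")  # same shape as the input
    return coeffs[:keep, :].ravel()

pssm = np.random.default_rng(0).random((137, 20))
print(dct_features(pssm).shape)  # (400,)
```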
2.5. DWT
The DWT has been utilized to extract informative features from protein amino acid sequences, as introduced by Nanni et al. [28]. Here, we apply a 4-level DWT to the PSSM matrix. At each level, we compute the approximate and detail coefficients of each column and extract essential statistics (maximum, minimum, mean and standard deviation) from both, and we additionally capture the first five discrete cosine coefficients of the approximate coefficients. Therefore, each of the 20 column dimensions yields 4 + 4 + 5 = 13 features at each level.
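The per-column, per-level bookkeeping can be sketched with PyWavelets as below. The 'db1' wavelet is an assumption, since the section does not name one, and the sequence must be long enough for four decomposition levels.

```python
import numpy as np
import pywt
from scipy.fft import dct

def dwt_features(pssm, wavelet="db1", levels=4):
    """Per column: at each of 4 levels take max/min/mean/std of the
    approximate and detail coefficients (4 + 4) plus the first five DCT
    coefficients of the approximate coefficients (5), i.e., 13 per level."""
    feats = []
    for j in range(pssm.shape[1]):       # each of the 20 columns
        a = pssm[:, j]
        for _ in range(levels):
            a, d = pywt.dwt(a, wavelet)  # approximate and detail coefficients
            feats.extend([a.max(), a.min(), a.mean(), a.std(),
                          d.max(), d.min(), d.mean(), d.std()])
            feats.extend(dct(a, norm="ortho")[:5])
    return np.asarray(feats)             # 20 columns x 4 levels x 13 = 1040

pssm = np.random.default_rng(0).random((137, 20))
print(dwt_features(pssm).shape)  # (1040,)
```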
2.6. HOG
The HOG is a feature descriptor used in computer vision and image processing for object detection and recognition. Here, we propose a method to reduce redundancy in protein data using the HOG algorithm. We consider the PSSM as an image-like matrix representation. First, we compute the horizontal and vertical gradients of the PSSM to obtain the gradient magnitude and direction matrices. These matrices are then partitioned into 25 sub-matrices that incorporate both the gradient magnitude and direction information. Subsequently, we generate 10 distinct histogram channels for each sub-matrix based on its gradient direction. This approach effectively reduces redundancy by providing a compact representation of the protein data while preserving important spatial information.
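A sketch of this procedure follows. Weighting each orientation histogram by gradient magnitude is our reading of "incorporate both the gradient magnitude and direction information"; the text does not spell out the binning.

```python
import numpy as np

def hog_features(pssm, grid=5, bins=10):
    """HOG on the PSSM treated as an image: a 5 x 5 grid of sub-matrices
    (25 cells) with a 10-bin, magnitude-weighted orientation histogram per
    cell -> 250 features."""
    gy, gx = np.gradient(pssm.astype(float))  # vertical, horizontal gradients
    mag, ang = np.hypot(gx, gy), np.arctan2(gy, gx)
    feats = []
    rows = np.array_split(np.arange(pssm.shape[0]), grid)
    cols = np.array_split(np.arange(pssm.shape[1]), grid)
    for r in rows:
        for c in cols:
            m, a = mag[np.ix_(r, c)].ravel(), ang[np.ix_(r, c)].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(-np.pi, np.pi), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

pssm = np.random.default_rng(0).random((137, 20))
print(hog_features(pssm).shape)  # (250,)
```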
2.7. PsePSSM
The PsePSSM is a commonly utilized matrix descriptor in protein research [7]. It is specifically designed to preserve the essential information contained in the PSSM while incorporating the concept of PAAC. The PsePSSM descriptor is formulated as follows:
$$F_{\mathrm{PsePSSM}} = \begin{cases} \dfrac{1}{N}\displaystyle\sum_{i=1}^{N} p'_{i,j}, & j = 1, \ldots, 20 \\[2mm] \dfrac{1}{N - lag}\displaystyle\sum_{i=1}^{N - lag}\left(p'_{i,j} - p'_{i+lag,\,j}\right)^2, & j = 1, \ldots, 20,\ lag = 1, \ldots, 30, \end{cases} \quad (2.6)$$
where N is the sequence length (denoted R above) and lag refers to the distance between a residue and its neighboring residues. $p'_{i,j}$ is the normalized version of $p_{i,j}$, obtained by standardizing each row of the PSSM:

$$p'_{i,j} = \dfrac{p_{i,j} - \frac{1}{20}\sum_{k=1}^{20} p_{i,k}}{\sqrt{\frac{1}{20}\sum_{u=1}^{20}\left(p_{i,u} - \frac{1}{20}\sum_{k=1}^{20} p_{i,k}\right)^2}}. \quad (2.7)$$
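The following NumPy sketch implements Eqs (2.6) and (2.7); the small epsilon guarding against constant rows is our addition, and the sequence length must exceed the maximum lag.

```python
import numpy as np

def normalize_pssm(pssm):
    """Eq (2.7): standardize each row over its 20 amino acid scores."""
    mu = pssm.mean(axis=1, keepdims=True)
    sigma = pssm.std(axis=1, keepdims=True)
    return (pssm - mu) / (sigma + 1e-12)

def psepssm_features(pssm, max_lag=30):
    """Eq (2.6): 20 column means plus, for lag = 1..30, the mean squared
    lag-difference per column -> 20 + 20 * 30 = 620 features."""
    p = normalize_pssm(pssm)
    n = p.shape[0]
    feats = [p.mean(axis=0)]
    for lag in range(1, max_lag + 1):
        feats.append(((p[:n - lag] - p[lag:]) ** 2).mean(axis=0))
    return np.concatenate(feats)

pssm = np.random.default_rng(0).random((137, 20))
print(psepssm_features(pssm).shape)  # (620,)
```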
2.8. DRHGNN
2.8.1. Hypergraph learning statement
In a basic graph, the samples are depicted as vertexes, and two connected vertexes are joined by an edge [29,30]. However, the data structure in practical applications may go beyond pairwise connections and may even be multi-modal. Accordingly, the hypergraph was proposed. Unlike a simple graph, a hypergraph comprises a vertex set and a hyperedge set whose hyperedges each join two or more vertexes, as shown in Figure 2. A hypergraph is represented by G = (V, E, W), where V is the vertex set, E is the hyperedge set and W is a diagonal matrix that assigns a weight to each hyperedge. The hypergraph is described by a |V| × |E| incidence matrix H with entries defined as

$$h(v,e) = \begin{cases} 1, & \text{if } v \in e \\ 0, & \text{if } v \notin e. \end{cases} \quad (2.8)$$
Figure 2.
The comparison between graph and hypergraph.
Here, we can cast the membrane protein classification task as a task on a hypergraph because more than two proteins are linked by non-covalent interactions [18,19]. $X = [x_1, \ldots, x_N]^T$ represents the features of N membrane proteins. Hyperedges are constructed using the Euclidean distance $d(x_i, x_j)$ between two features. In the hyperedge construction, each vertex represents a membrane protein, and each hyperedge is formed by one central vertex and its K nearest neighbors. As a result, N hyperedges, each containing K + 1 vertexes, are generated: each time we select one vertex in the dataset as the centroid and use its K nearest neighbors in the selected feature space to generate one hyperedge, which includes the centroid itself, as illustrated in Figure 3. Thus, a hypergraph with N hyperedges is constructed from a single-modal representation of membrane proteins. The hypergraph is denoted by an incidence matrix $H \in \mathbb{R}^{N \times N}$, with $N \times (K+1)$ nonzero entries denoting $v \in e$ while the others equal zero.
Figure 3.
The schematic diagram of hyperedge generation and hypergraph generation.
In the case of multi-modal representations of membrane proteins, an incidence matrix $H_i$ is constructed for each modality's representation. After all the incidence matrices $H_i$ have been generated, they are concatenated to form the incidence matrix H of a multi-modal hypergraph. Thus, a hypergraph is constructed from multi-modal representations of membrane proteins, as shown in Figure 3; hypergraph generation is therefore flexible and extends naturally to multi-modal features.
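A compact sketch of this construction is given below: one hyperedge per centroid vertex from its K nearest neighbors in Euclidean distance, then concatenation of the per-modality incidence matrices along the hyperedge axis. The brute-force distance computation is for clarity only.

```python
import numpy as np

def knn_incidence(X, k):
    """One hyperedge per vertex: the centroid plus its k nearest neighbors,
    so H is N x N with k + 1 ones per column (hyperedge)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # (N, N)
    H = np.zeros_like(d)
    nn = np.argsort(d, axis=1)[:, :k + 1]  # self is at distance 0, so included
    for e, idx in enumerate(nn):           # hyperedge e is centered at vertex e
        H[idx, e] = 1.0
    return H

def multimodal_incidence(feature_list, k):
    """Concatenate the per-modality H_i to form the multi-modal hypergraph."""
    return np.concatenate([knn_incidence(X, k) for X in feature_list], axis=1)

modalities = [np.random.default_rng(s).random((50, 8)) for s in range(5)]
H = multimodal_incidence(modalities, k=8)
print(H.shape)  # (50, 250): 50 vertices, 5 x 50 hyperedges
```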
2.8.2. Hypergraph convolution
Feng et al. [31] first proposed the HGNN. They built a hyperedge convolution layer whose formulation is
$$X^{(l+1)} = \sigma\left(D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X^{(l)} \Theta^{(l)}\right), \quad (2.9)$$
where $X^{(l)} \in \mathbb{R}^{N \times C}$ represents the hypergraph's signal at the $l$th layer with N nodes and C-dimensional features, $W = \mathrm{diag}(w_1, \ldots, w_N)$ holds the weights of all hyperedges, and $\Theta^{(l)} \in \mathbb{R}^{C_1 \times C_2}$ is the parameter learned during training at the $l$th layer. $\sigma$ represents the nonlinear activation function. $D_v$ is the diagonal matrix of vertex degrees, while $D_e$ is the diagonal matrix of edge degrees [30].
Defining the hypergraph Laplacian $\tilde{H} = D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2}$, a hyperedge convolution layer is formulated as $X^{(l+1)} = \sigma(\tilde{H} X^{(l)} \Theta^{(l)})$.
A hyperedge convolution layer achieves a node-edge-node transform, which refines the node representations and extracts the high-order correlation from a hypergraph more efficiently.
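For reference, a NumPy sketch of $\tilde{H}$ follows; multiplying a feature matrix X by it performs exactly this node-edge-node transform. Unit hyperedge weights are assumed by default.

```python
import numpy as np

def hypergraph_laplacian(H, w=None):
    """H_tilde = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}, cf. Eq (2.9)."""
    w = np.ones(H.shape[1]) if w is None else np.asarray(w, dtype=float)
    dv = H @ w          # vertex degrees: total weight of incident hyperedges
    de = H.sum(axis=0)  # hyperedge degrees: number of member vertices
    Dv_isqrt = np.diag(1.0 / np.sqrt(dv))
    return Dv_isqrt @ H @ np.diag(w / de) @ H.T @ Dv_isqrt  # W De^{-1} merged

# One propagation step: X_next = hypergraph_laplacian(H) @ X
# (before applying Theta and the nonlinearity sigma).
```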
2.8.3. Residual hypergraph convolution
Feng et al. [31] used two hyperedge convolution layers and then used the softmax function to obtain predicted labels. However, the performance of HGNN drops as the number of layers increases because of the over-smoothing issue.
To resolve the over-smoothing issue, Huang et al. [20] and Chen et al. [22] used two simple and effective techniques, initial residual and identity mapping, on top of their shallow models. Inspired by their methods, we upgrade the HGNN with initial residual and identity mapping to prevent over-smoothing and gain accuracy from increased depth.
● Initial residual
Chen et al. [22] constructed a connection to the initial representation $X^{(0)}$ to relieve the over-smoothing problem. The initial residual connection guarantees that each node's final representation retains at least a proportion of the input features regardless of how many layers we stack.
Gasteiger et al. [32] proposed approximate personalized propagation of neural predictions (APPNP), which applies the initial residual connection as a linear combination of the propagated representation at each layer with the initial features, gathering information from multi-hop neighbors by separating feature transformation from propagation instead of stacking more neural network layers. Formally, APPNP's model is defined as
$$X^{(l+1)} = \sigma\left(\left((1 - \alpha_l)\tilde{H} X^{(l)} + \alpha_l X^{(0)}\right)\Theta^{(l)}\right). \quad (2.10)$$
In practice, we can set $\alpha_l = 0.1$ or $0.2$.
● Identity mapping
However, APPNP remains a shallow model; thus, the initial residual alone cannot extend HGNN to a deep model. To resolve this issue, Chen et al. [22] added an identity matrix $I_N$ to the weight matrix $\Theta^{(l)}$, following the identity-mapping idea of ResNet [21], which ensures that the DRHGNN performs at least as well as its shallow version.
Finally, a residual enhanced hyperedge convolution layer is formulated as

$$X^{(l+1)} = \sigma\left(\left((1 - \alpha_l)\tilde{H} X^{(l)} + \alpha_l X^{(0)}\right)\left((1 - \beta_l) I_N + \beta_l \Theta^{(l)}\right)\right). \quad (2.11)$$

In practice, we set $\beta_l = \lambda / l$, where $\lambda$ is a hyperparameter.
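The layer can be sketched in PyTorch as below. This follows our reading of Eq (2.11) rather than the released implementation; the class name and the ReLU activation are assumptions.

```python
import torch
import torch.nn as nn

class ResidualHGConv(nn.Module):
    """One residual enhanced hyperedge convolution layer, Eq (2.11):
    initial residual with strength alpha, identity mapping with
    beta_l = lambda / l."""

    def __init__(self, dim, alpha=0.1, lam=0.5):
        super().__init__()
        self.theta = nn.Linear(dim, dim, bias=False)  # Theta^(l)
        self.alpha, self.lam = alpha, lam

    def forward(self, x, x0, H_tilde, layer_idx):
        beta = self.lam / layer_idx
        # (1 - alpha_l) * H_tilde X^(l) + alpha_l * X^(0)
        support = (1 - self.alpha) * (H_tilde @ x) + self.alpha * x0
        # Right factor (1 - beta_l) I + beta_l Theta^(l) applied to support.
        return torch.relu((1 - beta) * support + beta * self.theta(support))
```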
2.8.4. DRHGNN analysis
Figure 4 illustrates the DRHGNN in detail. The multiple types of node features and their corresponding incidence matrices H, which model complex high-order correlation, are concatenated, respectively, accommodating the multi-modal representations of membrane proteins. The concatenated features and incidence matrix are then fed into the DRHGNN to obtain output labels for the nodes and ultimately perform the classification task. As detailed in the preceding section, we build a residual enhanced hypergraph convolution layer and simply stack multiple residual hypergraph convolution blocks, which tackles the over-smoothing problem of HGNN and yields an accuracy increase. Additional linear transforms are incorporated into the model's first and last layers, with the residual hypergraph convolutions used for information propagation. The deep embeddings are finally used for the classification task.
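Putting the pieces together, the architecture of Figure 4 might be sketched as follows, reusing the ResidualHGConv layer defined above; the hidden size and layer count are placeholders, not the configuration of Table 3.

```python
class DRHGNN(nn.Module):
    """FC in -> stacked residual hypergraph convolutions -> FC out."""

    def __init__(self, in_dim, hidden, n_classes, n_layers=8,
                 alpha=0.1, lam=0.5):
        super().__init__()
        self.fc_in = nn.Linear(in_dim, hidden)
        self.convs = nn.ModuleList(
            [ResidualHGConv(hidden, alpha, lam) for _ in range(n_layers)])
        self.fc_out = nn.Linear(hidden, n_classes)

    def forward(self, x, H_tilde):
        x0 = torch.relu(self.fc_in(x))   # initial representation X^(0)
        h = x0
        for l, conv in enumerate(self.convs, start=1):
            h = conv(h, x0, H_tilde, l)  # residual information propagation
        return self.fc_out(h)            # per-node class logits
```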
Figure 4.
The DRHGNN framework. FC represents a fully connected layer.
3. Results
3.1. Experimental settings
The DRHGNN has numerous hyperparameters. Instead of comparing all possible hyperparameter combinations, which would usually take several days, we used empirically chosen hyperparameters, which are shown in Table 3.
The baseline results were reproduced with their released code, with hyperparameters set as in the respective papers.
3.2. Metrics
We conducted accuracy calculations for predicting every type of membrane protein. We used accuracy (ACC), which measures the ratio of correctly predicted proteins to the total number of proteins in a specified dataset, to assess the performance of our model. The specific formula is
$$\mathrm{ACC} = \frac{n}{N}, \quad (3.1)$$
where n stands for the number of proteins that are correctly predicted in a specified dataset, and N stands for the total number of proteins present in the dataset.
To further evaluate the performance of the models, we also incorporated the F1-score and the Matthews correlation coefficient (MCC) as evaluation metrics.
The F1-score is a useful metric for addressing the issue of imbalanced datasets and is composed of precision and recall. Precision refers to the ratio of correctly predicted positive samples to all samples predicted as positive, while recall refers to the ratio of correctly predicted positive samples to all actual positive samples. The best value of the F1-score is 1, while the worst value is 0. Their specific formulas are

$$\mathrm{Precision} = \frac{TP}{TP + FP}, \quad (3.2)$$

$$\mathrm{Recall} = \frac{TP}{TP + FN}, \quad (3.3)$$

$$F1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}, \quad (3.4)$$

where TP, TN, FP and FN are true positives, true negatives, false positives and false negatives, respectively.
In order to comprehensively evaluate the F1-scores of multiple classes, we employed the macro average of F1-score, which aggregates the F1-score for different classes by taking their average with equal weights assigned to all classes.
MCC is widely acknowledged as a superior performance metric for the classification of imbalanced data. It is defined within the range of [−1, 1], where a value of 1 indicates that the classifier accurately predicts all positive and negative instances, while a value of −1 signifies that the classifier incorrectly predicts all instances. For category i (one-versus-rest), the specific formula is

$$\mathrm{MCC}(i) = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}. \quad (3.5)$$
The overall MCC for all categories is computed by averaging the MCC values of individual categories.
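With scikit-learn, the three metrics can be computed as sketched below. Note that `matthews_corrcoef` uses the multiclass generalization of MCC, whereas the per-class averaging described above would binarize each class one-versus-rest before applying Eq (3.5).

```python
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

def evaluate(y_true, y_pred):
    """ACC (Eq 3.1), macro-averaged F1 and MCC for a multiclass task."""
    return {
        "ACC": accuracy_score(y_true, y_pred),
        "macro-F1": f1_score(y_true, y_pred, average="macro"),
        "MCC": matthews_corrcoef(y_true, y_pred),
    }

print(evaluate([0, 1, 2, 2, 1], [0, 2, 2, 2, 1]))
```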
3.3. The selection of K value when constructing the hypergraph
The selection of the K neighbors plays a vital role in the construction of hyperedges, as it has a significant impact on the model's performance. The K value is selected by training the model with different K values and evaluating its performance; the optimal K is determined by the metric obtained on the validation set. We performed K value experiments on the four datasets using DRHGNN, with the macro average of the F1-score as the performance metric. As Table 4 shows, each dataset achieves its best result at a different K value, specifically K = 8, 10, 12 and 2, respectively. Therefore, when conducting experiments on the four datasets, we selected the K values 8, 10, 12 and 2 in sequence.
Table 4.
The performance of DRHGNN with different K values on four datasets. The best result for each dataset is bolded.
3.4. Performance comparison of DRHGNN and HGNN with different layers
Table 5 reports the performance of DRHGNN against HGNN with different layers on the four datasets; columns 4–9 show the ACC and macro average of the F1-score for both models. For easier comparison, the results are also plotted in Figure 5. From Table 5 and Figure 5, we can observe two points: 1) DRHGNN achieves much better performance than HGNN on the four datasets, with accuracy gains of 3.738, 3.903, 4.106 and 1.028%, respectively, and macro average F1-score gains of 15.306, 11.843, 13.887 and 3.591%, respectively, at their optimal layer counts. 2) The residual enhanced model (DRHGNN) performs stably, while the performance of HGNN deteriorates as the number of layers increases; the likely reason for HGNN's degradation with depth is the over-smoothing issue. The performance of DRHGNN persistently improves, achieving the best accuracy on the four datasets at layers 4, 8, 4 and 8, respectively, and the best macro average of the F1-score at layers 4, 8, 4 and 16, respectively.
Table 5.
Comparison of the ACC, macro average of F1-score between DRHGNN and HGNN with different depths on four datasets. The best result of methods for each dataset is bolded.
Figure 5.
The performance comparison of DRHGNN and HGNN with different layers on membrane protein classification task. (a) The performance comparison of DRHGNN and HGNN on Dataset 1; (b) The performance comparison of DRHGNN and HGNN on Dataset 2; (c) The performance comparison of DRHGNN and HGNN on Dataset 3; (d) The performance comparison of DRHGNN and HGNN on Dataset 4.
3.5. Performance comparison with multiple recently developed advanced methods
Tables 6–9 summarize the classification accuracy of DRHGNN and multiple recently developed advanced methods. Tables 6–8 compare the accuracy for each type of membrane protein and the overall accuracy across all membrane proteins on Dataset 1, Dataset 2 and Dataset 3. As Tables 6–8 show, the accuracy for each type of membrane protein obtained with our method is generally higher than that of the other methods, and the overall accuracy is also superior. More specifically, compared with MemType-2L [7] and the hypergraph neural network [34] on Dataset 1, DRHGNN achieves overall accuracy gains of 2.63 and 3.738%, respectively; on Dataset 2, gains of 5.507 and 3.903%, respectively; and on Dataset 3, gains of 11.128 and 4.106%, respectively. Furthermore, within these three datasets, the fifth type of membrane protein exhibits the highest accuracy among all types, which can potentially be attributed to the significantly larger number of samples available for that type. Table 9 compares the overall accuracy of our method and other methods on Dataset 4, where our method performs best among all compared methods: compared with CPSR [10] and the two-stage SVM [14], DRHGNN achieves overall accuracy gains of 3.314 and 1.814%, respectively. These results demonstrate the superior performance of DRHGNN on the membrane protein classification task. The detailed performance of DRHGNN on the four datasets is shown in Table 10.
Table 6.
Comparison of the ACC between DRHGNN and multiple recent state of the art methods on Dataset 1. The best result among methods is bolded.
3.6. Stability analysis
To further analyze the stability of DRHGNN compared to HGNN, we adjusted the training rate. All experiments were carried out with five different training rates, each with five distinct seeds, and we recorded the best result at the optimal number of layers in each experiment. Table 11 and Figure 6 show that DRHGNN consistently outperforms HGNN across all training rates, with overall accuracy gains of roughly 1.5 to 5% and macro average F1-score gains of 3.591 to 15.306%. This demonstrates that DRHGNN stably outperforms HGNN at different ratios; in particular, DRHGNN remains stable even at small training rates and performs best around the original training rate.
Table 11.
Summaries of the ACC, macro average of F1-score of DRHGNN and HGNN with different training ratios.
Figure 6.
Stability analysis. The performance of DRHGNN and HGNN with different training ratios on membrane protein classification task. (a) The performance on Dataset 1; (b) The performance on Dataset 2; (c) The performance on Dataset 3; (d) The performance on Dataset 4.
3.7. Ablation study
We conducted an ablation study on initial residual and identity mapping. In Table 12, columns 4–9 show the accuracy and the macro average of the F1-score of the four methods with different network depths on the four datasets. As Table 12 and Figure 7 show, HGNN with identity mapping mitigates the over-smoothing problem slightly, while HGNN with initial residual reduces it greatly. Adopting initial residual and identity mapping together significantly improves performance while effectively reducing the over-smoothing problem. Furthermore, the results of HGNN with both techniques and HGNN with only the initial residual are very close; however, adopting both yields higher accuracy and macro average of the F1-score and reaches the best result faster than adopting the initial residual alone.
Table 12.
Ablation study on initial residual and identity mapping. The best result of methods for each dataset is bolded.
Figure 7.
Ablation study on initial residual and identity mapping. The performance comparison of DRHGNN, HGNN, HGNN with initial residual, HGNN with identity mapping with different layers on membrane protein classification task. (a) The performance comparison on Dataset 1; (b) The performance comparison on Dataset 2; (c) The performance comparison on Dataset 3; (d) The performance comparison on Dataset 4.
4. Conclusions
This study proposed the DRHGNN, which enhances HGNN with initial residual and identity mapping, to further learn representations of membrane proteins and identify their types.
First, each membrane protein was represented by features extracted with the five methods. Second, an incidence matrix $H_i$ was constructed for each modality's membrane protein representation. Lastly, the multi-modal membrane protein features and the corresponding $H_i$ were concatenated, respectively, and both were fed into the DRHGNN for the membrane protein classification task.
In those extensive experiments on the membrane protein classification task, our method achieved much better performance on all four datasets.
DRHGNN addresses the two aforementioned issues: the high-order correlation among membrane proteins and the scenarios of multi-modal representations of membrane proteins. Meanwhile, compared with HGNN, DRHGNN can handle the over-smoothing issue as the number of model layers increases.
However, we found three areas for improvement during the experiments. One is that DRHGNN is quite sensitive to different datasets; specifically, performance on Dataset 4 is better than on the other datasets. The overall size of Dataset 4, its training/testing split and its class distribution differ from the other datasets, which may significantly influence the training process and the generalization capability of the model. Another is that we did not update the hyperedges to follow the adjusted feature embeddings in different layers, so the model is still worth enhancing. The third is that the hyperedges were constructed based on feature similarity, which may not directly represent physical interactions between membrane proteins; our approach should therefore be considered an approximation rather than a direct representation of interactions.
The main challenge for future research is to resolve these three issues: DRHGNN's sensitivity to different datasets, updating hyperedges to follow the adjusted feature embeddings in different layers and accurately capturing physical interactions among membrane proteins.
In the meantime, progress in interaction prediction research across diverse fields of computational biology holds great promise for gaining valuable insights into genetic markers and ncRNAs associated with membrane protein types, such as the prediction of lncRNA-miRNA interactions using a graph convolutional network (GCN) with a conditional random field (CRF) [35], gene function and protein association (GFPA) analysis that extracts reliable associations between gene function and cell surface proteins from single-cell multimodal data [36], prediction of lncRNA-miRNA associations using a network distance analysis model [37], prediction of potential associations of disease-related metabolites using a GCN with a graph attention network [38], prediction of human ether-a-go-go-related gene (hERG) blockers using molecular fingerprints and a graph attention mechanism [39] and prediction of potential associations between metabolites and diseases based on an autoencoder and nonnegative matrix factorization [40]. These will also be our future research directions.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Acknowledgments
This paper is supported by the National Natural Science Foundation of China (62372318, 61902272, 62073231, 62176175, 61876217, 61902271), the National Research Project (2020YFC2006602), the Provincial Key Laboratory for Computer Information Processing Technology, Soochow University (KJS2166) and the Opening Topic Fund of Big Data Intelligent Engineering Laboratory of Jiangsu Province (SDGC2157).
Conflict of interest
All authors declare no conflicts of interest in this paper.
References
[1]
X. Zhang, L. Chen, Prediction of membrane protein types by fusing protein-protein interaction and protein sequence information, Biochim. Biophys. Acta Proteins Proteom., 1868 (2020), 140524. https://doi.org/10.1016/j.bbapap.2020.140524 doi: 10.1016/j.bbapap.2020.140524
[2]
H. Wang, Y. Ding, J. Tang, F. Guo, Identification of membrane protein types via multivariate information fusion with Hilbert-Schmidt independence criterion, Neurocomputing, 383 (2020), 257–269. https://doi.org/10.1016/j.neucom.2019.11.103 doi: 10.1016/j.neucom.2019.11.103
[3]
K. Chou, D. W. Elrod, Prediction of membrane protein types and subcellular locations, Proteins, 34 (1999), 137–153.
[4]
K. Chou, Prediction of protein cellular attributes using pseudo-amino acid composition, Proteins, 43 (2001), 246–255. https://doi.org/10.1002/prot.1035 doi: 10.1002/prot.1035
[5]
M. Wang, J. Yang, G. Liu, Z. Xu, K. Chou, Weighted-support vector machines for predicting membrane protein types based on pseudo-amino acid composition, Protein Eng. Des. Sel., 17 (2004), 509–516. https://doi.org/10.1093/protein/gzh061 doi: 10.1093/protein/gzh061
[6]
H. Liu, M. Wang, K. Chou, Low-frequency Fourier spectrum for predicting membrane protein types, Biochem. Biophys. Res. Commun., 336 (2005), 737–739. https://doi.org/10.1016/j.bbrc.2005.08.160 doi: 10.1016/j.bbrc.2005.08.160
[7]
K. Chou, H. Shen, MemType-2L: A web server for predicting membrane proteins and their types by incorporating evolution information through Pse-PSSM, Biochem. Biophys. Res. Commun., 360 (2007), 339–345. https://doi.org/10.1016/j.bbrc.2007.06.027 doi: 10.1016/j.bbrc.2007.06.027
[8]
M. A. Rezaei, P. Abdolmaleki, Z. Karami, E. B. Asadabadi, M. A. Sherafat, H. Abrishami-Moghaddam, et al., Prediction of membrane protein types by means of wavelet analysis and cascaded neural networks, J. Theor. Biol., 254 (2008), 817–820. https://doi.org/10.1016/j.jtbi.2008.07.012 doi: 10.1016/j.jtbi.2008.07.012
[9]
L. Wang, Z. Yuan, X. Chen, Z. Zhou, The prediction of membrane protein types with NPE, IEICE Electron. Express, 7 (2010), 397–402. https://doi.org/10.1587/elex.7.397 doi: 10.1587/elex.7.397
[10]
M. Hayat, A. Khan, Predicting membrane protein types by fusing composite protein sequence features into pseudo amino acid composition, J. Theor. Biol., 271 (2011), 10–17. https://doi.org/10.1016/j.jtbi.2010.11.017 doi: 10.1016/j.jtbi.2010.11.017
[11]
M. Hayat, A. Khan, M. Yeasin, Prediction of membrane proteins using split amino acid and ensemble classification, Amino Acids, 42 (2012), 2447–2460. https://doi.org/10.1007/s00726-011-1053-5 doi: 10.1007/s00726-011-1053-5
[12]
M. Hayat, A. Khan, MemHyb: predicting membrane protein types by hybridizing SAAC and PSSM, J. Theor. Biol., 292 (2012), 93–102. https://doi.org/10.1016/j.jtbi.2011.09.026 doi: 10.1016/j.jtbi.2011.09.026
[13]
Y. Chen, K. Li, Predicting membrane protein types by incorporating protein topology, domains, signal peptides, and physicochemical properties into the general form of Chou's pseudo amino acid composition, J. Theor. Biol., 318 (2013), 1–12. https://doi.org/10.1016/j.jtbi.2012.10.033 doi: 10.1016/j.jtbi.2012.10.033
[14]
G. Han, Z. Yu, V. Anh, A two-stage SVM method to predict membrane protein types by incorporating amino acid classifications and physicochemical properties into a general form of Chou's PseAAC, J. Theor. Biol., 344 (2014), 31–39. https://doi.org/10.1016/j.jtbi.2013.11.017 doi: 10.1016/j.jtbi.2013.11.017
[15]
S. Wan, M. Mak, S. Kung, Mem-mEN: Predicting multi-functional types of membrane proteins by interpretable elastic nets, IEEE/ACM Trans. Comput. Biol. Bioinf., 13 (2016), 706–718. https://doi.org/10.1109/TCBB.2015.2474407 doi: 10.1109/TCBB.2015.2474407
[16]
W. Lu, J. Shen, Y. Zhang, H. Wu, Y. Qian, X. Chen, et al., Identifying membrane protein types based on lifelong learning with dynamically scalable networks, Front. Genet., 12 (2022), 2787. https://doi.org/10.3389/fgene.2021.834488 doi: 10.3389/fgene.2021.834488
[17]
Y. Wang, Y. Zhai, Y. Ding, Q. Zou, SBSM-Pro: Support bio-sequence machine for proteins, preprint, arXiv: 2308.10275.
[18]
J. B. Pereira-Leal, E. D. Levy, S. A. Teichmann, The origins and evolution of functional modules: lessons from protein complexes, Philos. Trans. R. Soc. B Biol. Sci., 361 (2006), 507–517. https://doi.org/10.1098/rstb.2005.1807 doi: 10.1098/rstb.2005.1807
[19]
E. D. Levy, J. B. Pereira-Leal, C. Chothia, S. A. Teichmann, 3D complex: A structural classification of protein complexes, PLoS Comput. Biol., 2 (2006), 155. https://doi.org/10.1371/journal.pcbi.0020155 doi: 10.1371/journal.pcbi.0020155
[20]
J. Huang, X. Huang, J. Yang, Residual enhanced multi-hypergraph neural network, in 2021 IEEE International Conference on Image Processing (ICIP), (2021), 3657–3661. https://doi.org/10.1109/ICIP42928.2021.9506153
[21]
K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2016), 770–778. https://doi.org/10.1109/CVPR.2016.90
[22]
M. Chen, Z. Wei, Z. Huang, B. Ding, Y. Li, Simple and deep graph convolutional networks, in Proceedings of the 37th International Conference on Machine Learning, (2020), 1725–1735.
[23]
B. Boeckmann, A. Bairoch, R. Apweiler, M. Blatter, A. Estreicher, E. Gasteiger, et al., The SWISS-PROT protein knowledgebase and its supplement TrEMBL in 2003, Nucleic Acids Res., 31 (2003), 365–370. https://doi.org/10.1093/nar/gkg095 doi: 10.1093/nar/gkg095
[24]
W. Li, A. Godzik, Cd-hit: a fast program for clustering and comparing large sets of protein or nucleotide sequences, Bioinformatics, 22 (2006), 1658–1659. https://doi.org/10.1093/bioinformatics/btl158 doi: 10.1093/bioinformatics/btl158
[25]
S. F. Altschul, T. L. Madden, A. A. Schäffer, J. Zhang, Z. Zhang, W. Miller, et al., Gapped BLAST and PSI-BLAST: a new generation of protein database search programs, Nucleic Acids Res., 25 (1997), 3389–3402. https://doi.org/10.1093/nar/25.17.3389 doi: 10.1093/nar/25.17.3389
[26]
J. C. Jeong, X. Lin, X. Chen, On position-specific scoring matrix for protein function prediction, IEEE/ACM Trans. Comput. Biol. Bioinf., 8 (2011), 308–315. https://doi.org/10.1109/TCBB.2010.93 doi: 10.1109/TCBB.2010.93
[28]
L. Nanni, S. Brahnam, A. Lumini, Wavelet images and Chou's pseudo amino acid composition for protein classification, Amino Acids, 43 (2012), 657–665. https://doi.org/10.1007/s00726-011-1114-9 doi: 10.1007/s00726-011-1114-9
[29]
B. Schölkopf, J. Platt, T. Hofmann, Learning with hypergraphs: Clustering, classification, and embedding, in Advances in Neural Information Processing Systems 19, MIT Press, (2007), 1601–1608.
[30]
Y. Gao, M. Wang, D. Tao, R. Ji, Q. Dai, 3-D object retrieval and recognition with hypergraph analysis, IEEE Trans. Image Process., 21 (2012), 4290–4303. https://doi.org/10.1109/TIP.2012.2199502 doi: 10.1109/TIP.2012.2199502
[31]
Y. Feng, H. You, Z. Zhang, R. Ji, Y. Gao, Hypergraph neural networks, in The Thirty-Third AAAI Conference on Artificial Intelligence, 33 (2019), 3558–3565. https://doi.org/10.1609/aaai.v33i01.33013558
[32]
J. Gasteiger, A. Bojchevski, S. Günnemann, Predict then propagate: Graph neural networks meet personalized pageRank, in Seventh International Conference on Learning Representations, (2019).
[33]
A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, et al., PyTorch: An imperative style, high-performance deep learning library, in Proceedings of the 33rd International Conference on Neural Information Processing Systems, (2019), 8026–8037.
[34]
W. Lu, M. Qian, Y. Zhang, H. Wu, Y. Ding, J. Shen, et al., Identification of membrane protein types based using hypergraph neural network, Curr. Bioinf., 18 (2023), 346–358. http://doi.org/10.2174/1574893618666230224143726 doi: 10.2174/1574893618666230224143726
[35]
W. Wang, L. Zhang, J. Sun, Q. Zhao, J. Shuai, Predicting the potential human lncRNA–miRNA interactions based on graph convolution network with conditional random field, Briefings Bioinf., 23 (2022), 463. https://doi.org/10.1093/bib/bbac463 doi: 10.1093/bib/bbac463
[36]
H. Hu, Z. Feng, H. Lin, J. Cheng, J. Lyu, Y. Zhang, et al., Gene function and cell surface protein association analysis based on single-cell multiomics data, Comput. Biol. Med., 157 (2023), 106733. https://doi.org/10.1016/j.compbiomed.2023.106733 doi: 10.1016/j.compbiomed.2023.106733
[37]
L. Zhang, P. Yang, H. Feng, Q. Zhao, H. Liu, Using network distance analysis to predict lncRNA–miRNA interactions, Interdiscip. Sci., 13 (2021), 535-545. https://doi.org/10.1007/s12539-021-00458-z doi: 10.1007/s12539-021-00458-z
[38]
F. Sun, J. Sun, Q. Zhao, A deep learning method for predicting metabolite–disease associations via graph neural network, Briefings Bioinf., 23 (2022), 266. https://doi.org/10.1093/bib/bbac266 doi: 10.1093/bib/bbac266
[39]
T. Wang, J. Sun, Q. Zhao, Investigating cardiotoxicity related with hERG channel blockers using molecular fingerprints and graph attention mechanism, Comput. Biol. Med., 153 (2023), 106464. https://doi.org/10.1016/j.compbiomed.2022.106464 doi: 10.1016/j.compbiomed.2022.106464
[40]
H. Gao, J. Sun, Y. Wang, Y. Lu, L. Liu, Q. Zhao, et al., Predicting metabolite–disease associations based on auto-encoder and non-negative matrix factorization, Briefings Bioinf., 24 (2023), 259. https://doi.org/10.1093/bib/bbad259 doi: 10.1093/bib/bbad259