Research article Special Issues

Sustainable practices to reduce environmental impact of industry using interaction aggregation operators under interval-valued Pythagorean fuzzy hypersoft set

  • Received: 27 February 2023 Revised: 08 April 2023 Accepted: 11 April 2023 Published: 20 April 2023
  • MSC : 03E72, 68T35, 90B50

  • Optimization techniques can be used to find the optimal combination of inputs and parameters and help identify the most efficient solution. Aggregation operators (AOs) play a prominent role in discernment between two circulations of prospect and pull out anxieties from that insight. The most fundamental objective of this research is to extend the interaction AOs to the interval-valued Pythagorean fuzzy hypersoft set (IVPFHSS), the comprehensive system of the interval-valued Pythagorean fuzzy soft set (IVPFSS). The IVPFHSS adroitly contracts with defective and ambagious facts compared to the prevalent Pythagorean fuzzy soft set and interval-valued intuitionistic fuzzy hypersoft set (IVIFHSS). It is the dominant technique for enlarging imprecise information in decision-making (DM). The most important intention of this exploration is to intend interactional operational laws for IVPFHSNs. We extend the AOs to interaction AOs under IVPFHSS setting such as interval-valued Pythagorean fuzzy hypersoft interactive weighted average (IVPFHSIWA) and interval-valued Pythagorean fuzzy hypersoft interactive weighted geometric (IVPFHSIWG) operators. Also, we study the significant properties of the proposed operators, such as Idempotency, Boundedness, and Homogeneity. Still, the prevalent multi-criteria group decision-making (MCGDM) approaches consistently carry irreconcilable consequences. Meanwhile, our proposed MCGDM model is deliberate to accommodate these shortcomings. By utilizing a developed mathematical model and optimization technique, Industry 5.0 can achieve digital green innovation, enabling the development of sustainable processes that significantly decrease environmental impact. The impacts show that the intentional model is more operative and consistent in conducting inaccurate data based on IVPFHSS.

    Citation: Nadia Khan, Sehrish Ayaz, Imran Siddique, Hijaz Ahmad, Sameh Askar, Rana Muhammad Zulqarnain. Sustainable practices to reduce environmental impact of industry using interaction aggregation operators under interval-valued Pythagorean fuzzy hypersoft set[J]. AIMS Mathematics, 2023, 8(6): 14644-14683. doi: 10.3934/math.2023750




    Proteins play an extremely important role in our daily activities, with functions such as immunity and cell signaling. Their diverse functions arise from their different structures, so fully understanding protein function requires predicting protein structure. Although the advent of AlphaFold2 [1] has changed the protein prediction landscape and achieved very reliable results for tertiary structure prediction [2], secondary structure prediction remains significant, because the secondary structure improves the alignment of the tertiary structure and thereby affects the spatial morphology of the protein. This paper therefore proposes a deep-learning-based method to predict protein secondary structure.

    Protein secondary structure is the local spatial conformation of amino acid residues in protein polypeptide chains. It is commonly described in 3 states (helix (H), strand (E) and coil (C)), which can be refined into 8 states: α-helix (H), 3₁₀-helix (G), π-helix (I), β-bridge (B), β-sheet (E), bend (S), turn (T) and coil (C) [3,4,5]. This study is devoted to 8-state prediction, which is more informative and more challenging.

    In the 1990s, Burkhard Rost and Chris Sander first used neural networks to predict the secondary structure of proteins [6]; the method achieved excellent results and pioneered the field of protein structure prediction. Early protein secondary structure prediction relied on statistical methods and heuristic rules [7]: Support Vector Machines [8], Bayesian classifiers, Markov models [9] and feedforward neural networks [10,11] have all been applied to the task. With the advent of the post-genomic era, the amount of protein data has increased. Owing to the high cost and difficulty of experiments, traditional experimental determination methods cannot meet the growing demand for protein and structural data analyses, so computational protein structure prediction has become a hot issue in bioinformatics. In the last few years, as deep learning has made tremendous progress in natural language processing, machine vision and speech recognition, bioinformatics has also begun to use deep learning methods extensively.

    In recent years, many scholars and researchers have achieved excellent results in 8-state protein secondary structure prediction. Busia et al. proposed a protein sequence prediction technique that combined past successes of convolutional neural networks with language modeling, achieving good results [12]. Using the combined synergy of a convolutional neural network, a residual network and a bidirectional recurrent neural network, Zhang et al. [13] designed a local block composed of convolutional filters and the raw input to capture local sequence features. Krieger et al. estimated class membership probabilities of residues using a nearest-neighbor search, which are then fed into a dynamic programming algorithm, showing good results on the CASP datasets [14]. Uddin et al. proposed combining the self-attention mechanism with the Deep Inception-Inside-Inception (Deep3I) network to track interactions between amino acid residues at different distances [15]. Kotowski et al. proposed a single-sequence-based method called ProteinUnet, which effectively shortens inference time and improves training speed [16]. Sonsare and Gunavathi proposed a model consisting of a 1D-Convnet and an improved recurrent neural network with an improved sequential coin toss optimizer, achieving good prediction accuracy on CB513 and CullPDB [17].

    This paper proposes an 8-state protein secondary structure prediction method named WG-ICRN, based on WGAN and a ResNet with Inception. WG-ICRN extracts protein feature information using WGAN and combines it with the PSSM [18] to enhance the features; the combined feature matrix is named WG-data. The increased length and width of WG-data make its feature maps larger in area and richer in feature information. WG-data is then fed into the ICRN module, a transformation of the residual network: Inception is introduced into the residual network to replace the convolution layer, and multi-scale convolution widens the input feature maps to further enrich the features.

    The main contributions of this study are: (1) We use WGAN to extract protein information from the sliding-window-processed PSSM and combine it with the PSSM to build a new feature set rich in protein features. (2) The ICRN model combines Inception and the residual network: Inception increases the width of the network while the residual network guarantees its depth, improving performance from two aspects. (3) ICRN reduces the number of training parameters by using multiple smaller filters to reduce the dimension of the data, so its training time is shorter than the residual network's, saving system resources. (4) Experimental results show that WG-ICRN is superior to other popular models in prediction accuracy.

    The generative adversarial network (GAN) [19] was proposed by Ian Goodfellow in 2014 and consists of two parts: a generator (G) and a discriminator (D). G generates similar fake data by learning the distribution characteristics of real data, while D judges and scores the authenticity of the data. GAN has been applied to image denoising and feature extraction [20,21,22] and has been shown to have good properties. However, GAN models are difficult to optimize, as tuning the parameters of G and D is a tedious problem. In recent years, many optimization algorithms [23,24,25,26], such as the Aquila optimizer [27] and the Gazelle optimization algorithm [28], have provided directions for solving this difficulty. A more critical issue remains: when D is near-optimal, the loss of G suffers from vanishing gradients. WGAN instead uses the Wasserstein distance, which alleviates this problem and can reflect the distance between two distributions even when they do not overlap [29].

    The training of WGAN is a constant game and confrontation between G and D. When training D, the data generated by the previous round of G and the real data are spliced together as x, with the fake data labeled 0 and the real data labeled 1. Feeding x into D produces a score (a number from 0 to 1), and the loss function built from the score and the labels y drives gradient backpropagation. The training process of D is shown in Figure 1(a). When training G, G and D are treated as a whole, named "D_on_G". The output of this whole system (referred to as the DG system) is still the score. Entering a set of random vectors z, G generates a set of data, which D then scores; this is the forward pass of the DG system. The training process is presented in Figure 1(b).

    Figure 1.  The training process of WGAN.

    The GAN objective function is given in formula (1), where x and z denote the real input data and the random noise, G(z) denotes the data generated by G from z, and D(x) denotes the probability that x is real. In the ideal case, G generates data G(z) so similar to the real protein data that D can no longer judge their authenticity, that is, D(G(z)) = 0.5.

    $\min_G \max_D V(D, G) = \mathbb{E}_{x \sim P_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim P_z(z)}[\log(1 - D(G(z)))]$ (1)

    The objective function (1) can be optimized in two parts. Part 1: fix G and optimize D; then (1) can be rewritten as formula (2), whose minimized form is formula (3). Part 2: fix D and optimize G, which is equivalent to minimizing formula (4). In WGAN, the discriminator output is no longer passed through a logarithm; provided the critic is constrained so that its gradients do not exceed a fixed constant, the objective is to maximize formula (5).

    $\max_D \mathbb{E}_{x \sim P_r}[\log D(x)] + \mathbb{E}_{x \sim P_g}[\log(1 - D(x))]$ (2)
    $\min_D -\mathbb{E}_{x \sim P_r}[\log D(x)] - \mathbb{E}_{x \sim P_g}[\log(1 - D(x))]$ (3)
    $\min_G \mathbb{E}_{x \sim P_g}[\log(1 - D(x))]$ (4)
    $L = \mathbb{E}_{x \sim P_r}[D(x)] - \mathbb{E}_{x \sim P_g}[D(x)]$ (5)
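    To make the difference concrete, here is a small NumPy sketch (not from the paper) that evaluates the minimized GAN discriminator loss of formula (3) and the WGAN critic loss of formula (5) on toy score arrays; all values are invented for illustration.

    ```python
    import numpy as np

    def gan_d_loss(d_real, d_fake):
        """Minimized GAN discriminator objective, formula (3).
        d_real, d_fake are discriminator outputs in (0, 1)."""
        return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

    def wgan_critic_loss(d_real, d_fake):
        """Negated WGAN critic objective, formula (5): no log and no sigmoid,
        so the critic scores are unbounded and gradients do not vanish."""
        return -(np.mean(d_real) - np.mean(d_fake))

    # Toy scores: the critic rates real data higher than generated data.
    real_scores = np.array([0.9, 0.8, 0.95])
    fake_scores = np.array([0.1, 0.2, 0.05])

    print(gan_d_loss(real_scores, fake_scores))      # small positive value
    print(wgan_critic_loss(real_scores, fake_scores))  # negative: real >> fake
    ```

    Minimizing the WGAN critic loss widens the gap between the mean scores of real and generated data, which is what approximates the Wasserstein distance.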

    In this experiment, we introduce a CNN into WGAN to assist feature extraction. Local receptive fields and weight sharing in CNNs provide invariance to displacement, scaling and distortion. We use ReLU as the activation function of the CNN, which is calculated as Eq. (6).

    $F_i^k = f\left(\sum_h P_{i-1}^h * W_i^k + b\right)$ (6)

    Here, f is ReLU, $P_{i-1}^h$ is the h-th feature map obtained from the input protein data and the convolution kernels of the previous layer, $W_i^k$ is the k-th convolution kernel of layer i, and b is the bias. We also use a gradient penalty to improve the stability of WGAN training. The network structure of the WGAN used in the experiment is shown in Figure 2.

    Figure 2.  WGAN model diagram.
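    As an illustration of Eq. (6), the following minimal NumPy sketch (not the paper's implementation) computes one convolutional layer with ReLU. It uses cross-correlation with "valid" boundaries, as deep learning frameworks typically do; all shapes are illustrative.

    ```python
    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def conv_relu(feature_maps, kernels, bias):
        """One layer following Eq. (6): output map k is the ReLU of the sum
        over input maps h of (P^h * W^k) plus a bias.
        feature_maps: (H, W, C_in); kernels: (kh, kw, C_in, C_out)."""
        H, W, C_in = feature_maps.shape
        kh, kw, _, C_out = kernels.shape
        out = np.zeros((H - kh + 1, W - kw + 1, C_out))
        for k in range(C_out):
            for i in range(out.shape[0]):
                for j in range(out.shape[1]):
                    patch = feature_maps[i:i + kh, j:j + kw, :]
                    out[i, j, k] = np.sum(patch * kernels[:, :, :, k]) + bias[k]
        return relu(out)

    x = np.random.randn(8, 8, 2)     # a small 2-channel feature map
    w = np.random.randn(3, 3, 2, 4)  # four 3x3 kernels over 2 input channels
    y = conv_relu(x, w, np.zeros(4))
    print(y.shape)  # (6, 6, 4)
    ```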

    Network depth is very important for model performance, but in practice deep networks face degradation problems: accuracy drops as depth grows. Studies have shown that deep networks suffer from gradient explosion or vanishing, and Residual Networks (ResNet) [30] introduce residual learning to alleviate this problem. Nowadays, ResNet is widely used in computer vision and medical analysis [31,32].

    Specifically, for a block with input X, denote the mapping to be learned by H(X). Instead of learning H(X) directly, the block learns the residual F(X) = H(X) - X and outputs F(X) + X. Residual learning is easier than learning the original mapping directly: when the residual is 0, the block simply performs an identity mapping, so network performance cannot degrade; in practice the residual is not 0, so the block learns new features on top of the input and achieves better performance. Residual learning resembles a short-circuit connection, structured as shown in Figure 3.

    Figure 3.  Structure of residual learning.
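    The identity behavior described above can be sketched in a few lines of NumPy (an illustrative toy, not the paper's code):

    ```python
    import numpy as np

    def residual_block(x, f):
        """H(X) = F(X) + X: the block learns the residual F and adds the input back."""
        return f(x) + x

    x = np.array([1.0, -2.0, 3.0])

    # When the learned residual is zero, the block reduces to the identity
    # mapping, so stacking such blocks cannot degrade the representation.
    identity_out = residual_block(x, lambda v: np.zeros_like(v))
    print(identity_out)  # same as x

    # A non-zero residual adds new features on top of the input.
    print(residual_block(x, lambda v: 0.1 * v))  # elementwise 1.1 * x
    ```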

    The original residual block consists of convolution and pooling before residual learning; its connection method is shown in Figure 4(a). The article [33] conducted a more detailed analysis of the residual structure and obtained an improved residual learning structure in which batch normalization and ReLU are performed before the convolutional layers, shown in Figure 4(b).

    Figure 4.  The residual block structure.

    In 2014, Szegedy et al. proposed the Inception structure [34]. Inception applies convolution kernels of different sizes to the same feature map in parallel and merges the resulting maps of different scales. Notably, Inception does not change the size of the original features; it only enriches the characteristic information of the protein through different convolution kernels, diversifying the features. The network structure of InceptionV2 is shown in Figure 5.

    Figure 5.  The network structure of InceptionV2.

    In this experiment, we replace the first convolutional layer and max-pooling layer of the ResNet model with an improved Inception module, which extracts features from the WG-data at different scales through convolution kernels of different sizes. This enriches the feature information and improves the accuracy of protein secondary structure prediction. The improved Inception module structure is shown in Figure 6.

    Figure 6.  The improved Inception module structure.

    In this paper, ICRN-N denotes the improved ResNet at different depths, where N is the number of weighted layers, that is, the convolutional and fully connected layers; pooling layers are not counted. We set the number of weighted layers to 10, 18 and 34, and the structures of ICRN-10, ICRN-18 and ICRN-34 are shown in Table 1.

    Table 1.  ICRN structure at different depths.
    Layer name         ICRN-10                   ICRN-18                   ICRN-34
    Inception block    parallel branches (shared by all depths): [1×1, 3×3, 3×3, 3×3], [1×1, 3×3, 3×3] and [Maxpool, 1×1]
    Residuals-Block-1  [3×3, 64; 3×3, 64]×2      [3×3, 64; 3×3, 64]×2      [3×3, 64; 3×3, 64]×3
    Residuals-Block-2  [3×3, 128; 3×3, 128]×2    [3×3, 128; 3×3, 128]×2    [3×3, 128; 3×3, 128]×4
    Residuals-Block-3  /                         [3×3, 256; 3×3, 256]×2    [3×3, 256; 3×3, 256]×6
    Residuals-Block-4  /                         [3×3, 512; 3×3, 512]×2    [3×3, 512; 3×3, 512]×3
    Output             average pool, fully connected, softmax


    The structure of WG-ICRN is shown in Figure 7; the network model is mainly divided into the WGAN and ICRN modules. First, a protein is processed into a PSSM of size 20 × L, where 20 is the feature dimension and L is the protein length. Since protein lengths differ, sliding windows of length W are used to cut the PSSM. The processed data serve as the training data of WGAN, and key features are extracted through the confrontation of G and D. Several convolutional layers assist the G and D networks; the G network uses Leaky ReLU as the activation function and, because of the large number of iterations and to prevent overfitting, Dropout. We concatenate the final feature data (Si-data) produced by the WGAN with the sliding-window-processed PSSM into a 40 × W matrix, named WG-data.

    Figure 7.  WG-ICRN networks structure.
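    The windowing and concatenation steps can be sketched as follows. This is an illustrative NumPy sketch assuming stride-1 windows with no padding (the paper does not state these details), and `si` is random stand-in data for the WGAN-extracted features:

    ```python
    import numpy as np

    def sliding_windows(pssm, w):
        """Cut a 20 x L PSSM into overlapping 20 x w windows (stride 1 assumed)."""
        _, L = pssm.shape
        return [pssm[:, i:i + w] for i in range(L - w + 1)]

    def make_wg_data(pssm_window, si_data):
        """Stack the WGAN-extracted features on top of the raw PSSM window,
        giving the 40 x W matrix the paper calls WG-data."""
        return np.vstack([pssm_window, si_data])

    L, W = 100, 19
    pssm = np.random.randn(20, L)   # one protein, 20 feature dimensions
    windows = sliding_windows(pssm, W)
    si = np.random.randn(20, W)     # stand-in for the WGAN feature output
    wg = make_wg_data(windows[0], si)
    print(len(windows), wg.shape)   # 100 - 19 + 1 = 82 windows, each (40, 19)
    ```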

    The ICRN module consists of two parts: the Inception block and the residual blocks. The improved Inception replaces the first 7 × 7 convolution layer and the 3 × 3 max-pooling layer of the model. It achieves the same convolution effect with three stacked 3 × 3 convolution layers, which have fewer training parameters and therefore save training time. At the same time, the multi-scale convolutions in Inception extract feature maps of different sizes which, combined, increase the richness of the features.
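    The parameter saving from replacing one 7 × 7 convolution with three stacked 3 × 3 convolutions can be checked with a short calculation (the channel width of 64 is an illustrative choice, not taken from the paper):

    ```python
    def conv_params(kh, kw, c_in, c_out, bias=True):
        """Weight count of a single convolutional layer."""
        return kh * kw * c_in * c_out + (c_out if bias else 0)

    c = 64  # illustrative channel width

    # One 7x7 layer versus three stacked 3x3 layers at the same channel width.
    # Both stacks cover a 7x7 receptive field.
    p_7x7 = conv_params(7, 7, c, c)
    p_3x3_stack = 3 * conv_params(3, 3, c, c)

    print(p_7x7, p_3x3_stack)  # the 3x3 stack uses roughly 45% fewer weights
    ```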

    After two feature enhancements, the residual block in ICRN will conduct the final training on the data. We respectively use the residual network of different depths to test the data. At the end of the network, we adopt an average pooling layer to replace the flattening of the matrix features, which reduces the number of parameters.

    Finally, in the output layer of the model, we use the fully connected layer and softmax layer to output the final prediction results and calculate the prediction accuracy through the evaluation criteria.

    The main public datasets used in this study are CullPDB [35] and the CASP10, CASP11, CASP12, CASP13, CASP14 and CB513 datasets [36,37,38,39,40,41]. The CullPDB dataset contains 12,288 proteins with pairwise sequence similarity below 25%. After removing duplicate proteins, the remaining 11,650 proteins of CullPDB were used as the training set. The CASP10-14 and CB513 test sets contain 99, 81, 19, 22, 24 and 513 protein chains, respectively. The number of protein sequences in each dataset is listed in Table 2.

    Table 2.  Statistical data in datasets.
    Datasets Number of proteins
    CullPDB 11650
    CASP10 99
    CASP11 81
    CASP12 19
    CASP13 22
    CASP14 24
    CB513 513


    The Position-Specific Scoring Matrix (PSSM) is rich in biological evolution information, which greatly improves the accuracy of protein secondary structure prediction, and is a widely used input feature. The PSSMs in this experiment were generated by multiple sequence alignment of the proteins against the NR database using PSI-BLAST [42], with the E-value threshold set to 0.001 and three iterations.

    Q8 and SOV are evaluation criteria that assess protein prediction performance from different perspectives. Q8 is the ratio of the number of correctly predicted amino acids to the total number of amino acids, expressed by formula (7), where S is the total number of amino acids.

    $Q_8 = \dfrac{S_H + S_E + S_G + S_B + S_I + S_S + S_T + S_C}{S} \times 100$ (7)

    Here, $S_H$, $S_E$, $S_G$, $S_B$, $S_I$, $S_S$, $S_T$ and $S_C$ are the numbers of correctly predicted α-helices, β-sheets, 3₁₀-helices, β-bridges, π-helices, bends, turns and coils, respectively. The accuracy for a single state is calculated by formula (8), where $N_j$ is the number of residues observed in state j.

    $Q_j = \dfrac{S_j}{N_j} \times 100, \quad j \in \{H, E, G, B, I, S, T, C\}$ (8)
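    Formulas (7) and (8) can be sketched directly from label strings; this toy example (sequences invented for illustration, not tied to the paper's data) counts matches per residue and per state:

    ```python
    def q8(pred, true):
        """Formula (7): percentage of residues whose 8-state label is correct."""
        correct = sum(p == t for p, t in zip(pred, true))
        return 100.0 * correct / len(true)

    def q_state(pred, true, j):
        """Formula (8): accuracy restricted to residues observed in state j."""
        n_j = sum(t == j for t in true)
        s_j = sum(p == t == j for p, t in zip(pred, true))
        return 100.0 * s_j / n_j if n_j else 0.0

    true = list("HHHEECCT")  # observed 8-state labels
    pred = list("HHHECCCT")  # predicted labels: one E mispredicted as C
    print(q8(pred, true))            # 7 of 8 residues correct -> 87.5
    print(q_state(pred, true, "E"))  # 1 of 2 observed E correct -> 50.0
    ```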

    SOV [43] is a measure based on the ratio of overlapping segments. Let all observed structural fragments be labeled $S_{ob}$, all predicted fragments $S_{pr}$, and let $S_o$ be the set of pairs ($S_{ob}$, $S_{pr}$) in the same state that overlap by at least one residue. For such a pair, minov($S_{ob}$, $S_{pr}$) is the length of their actual overlap, and maxov($S_{ob}$, $S_{pr}$) is the total extent covered by the two fragments. SOV is calculated by formula (9), where $N_{SOV}$ is a normalization factor.

    $SOV = \dfrac{100}{N_{SOV}} \sum_{S_o} \left[ \dfrac{minov(S_{ob}, S_{pr}) + \sigma(S_{ob}, S_{pr})}{maxov(S_{ob}, S_{pr})} \times len(S_{ob}) \right]$ (9)

    Here, $\sigma(S_{ob}, S_{pr})$ allows for variation in the observed fragment boundaries of the protein structure and is defined by formula (10).

    $\sigma(S_{ob}, S_{pr}) = \min\{\, maxov(S_{ob}, S_{pr}) - minov(S_{ob}, S_{pr}),\ minov(S_{ob}, S_{pr}),\ \lfloor len(S_{ob})/2 \rfloor,\ \lfloor len(S_{pr})/2 \rfloor \,\}$ (10)
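    The quantities minov, maxov and σ can be sketched for two overlapping segments, represented here as half-open intervals [start, end) (an illustrative convention; the paper does not specify one):

    ```python
    def minov(a, b):
        """Length of the actual overlap of two segments [start, end)."""
        return max(0, min(a[1], b[1]) - max(a[0], b[0]))

    def maxov(a, b):
        """Total extent covered by either of two overlapping segments."""
        return max(a[1], b[1]) - min(a[0], b[0])

    def length(seg):
        return seg[1] - seg[0]

    def sigma(s_ob, s_pr):
        """Formula (10): the allowed boundary deviation."""
        return min(
            maxov(s_ob, s_pr) - minov(s_ob, s_pr),
            minov(s_ob, s_pr),
            length(s_ob) // 2,
            length(s_pr) // 2,
        )

    # Observed helix segment covering residues 0..9, predicted segment 3..11:
    s_ob, s_pr = (0, 10), (3, 12)
    print(minov(s_ob, s_pr), maxov(s_ob, s_pr), sigma(s_ob, s_pr))  # 7 12 4
    ```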

    The experiments were run on an Intel(R) Xeon(R) Gold 5118 processor with an RTX 2080Ti graphics card under Linux. First, we tested the influence of the number of CNN convolutional layers on the WGAN feature extraction ability. The convolution kernel size was set to 3 × 3 × 64, the number of convolutional layers was varied over 1, 2, 3, 4 and 5, and the models were tested on CASP11-14. As Table 3 shows, with 3 convolutional layers the data generated by G are closest to the real data.

    Table 3.  Effect of the number of convolutional layers on Q8.
    Layers CASP11 CASP12 CASP13 CASP14
    1 68.26 68.85 67.21 68.36
    2 71.27 70.63 67.76 69.41
    3 72.55 71.81 69.88 70.29
    4 71.71 71.23 68.73 68.22
    5 70.33 69.46 67.24 67.61


    Because the number of iterations of the generator and discriminator will also affect the feature extraction ability of the WGAN, this study tested the influence of different iterations on the experiment, in which 3 convolutional layers are set, and the parameters of the convolution kernel are set to 3 × 3 × 64, 3 × 3 × 128 and 3 × 3 × 256, and the experimental results under different iterations are shown in Figure 8.

    Figure 8.  Q8 accuracy with different iterations.

    As shown in Figure 8, the best effect is achieved at 20,000 iterations, that is, the features extracted by G are the most realistic and effective. Beyond 20,000 iterations, D's ability to judge the authenticity of the generated data decreases, and a large error develops between the simulated and real features.

    To test the influence of the sliding window length on the experimental results, we selected lengths of 13, 15, 17, 19 and 21 for Q8 prediction. The experimental results in Table 4 show that a sliding window of 19 performs best.

    Table 4.  Q8 accuracy under different length of sliding windows.
    Sliding window CASP11 CASP12 CASP13 CASP14
    13 67.47 68.16 65.88 65.20
    15 68.09 69.27 66.67 67.41
    17 70.66 70.24 68.18 69.13
    19 71.55 70.81 68.88 69.29
    21 70.29 70.41 68.42 68.94


    Using residual networks of different depths, we tested CASP11-14 and obtained the results shown in Figure 9. WG-ICRN-18 has the highest accuracy: the dimension of WG-data is not high, and when the network is too deep part of the information is lost, which decreases accuracy. In addition, we calculated the SOV and Qj (j ∈ {H, E, G, B, I, S, T, C}) of each test set under the WG-ICRN method; the results are shown in Table 5.

    Figure 9.  Q8 accuracy under WG-ICRN at different depths.
    Table 5.  Q8 and SOV accuracy in the datasets.
    Dataset CASP10 CASP11 CASP12 CASP13 CASP14 CB513
    SOV 70.98 69.37 68.83 67.41 66.39 73.91
    Q8 73.32 71.55 70.81 68.88 69.29 75.56
    QG 52.72 55.32 47.90 36.56 33.51 37.71
    QH 92.66 85.74 87.43 93.20 89.75 92.25
    QI 0 0 0 0 0 0
    QT 62.21 49.47 56.78 57.31 45.29 53.67
    QB 9.81 19.30 7.25 8.30 3.88 7.68
    QE 88.80 82.15 77.76 84.37 76.79 80.44
    QS 53.88 43.68 48.90 34.74 13.33 25.39
    QC 68.12 62.91 67.38 70.71 68.54 71.37


    We divided CullPDB into five parts for five-fold cross-validation, with four parts serving as the training set and one as the test set; the results are shown in Table 6.

    Table 6.  Q8 under five-fold cross-validation.
    Fold 1 2 3 4 5 Average
    Q8 72.72 73.18 73.43 72.61 71.36 72.66


    We conducted ablation experiments to demonstrate the importance of each component, testing five network models on CASP11-14; the results are presented in Table 7. WG-ICRN is the model proposed in this paper, WG-Res combines WGAN and ResNet, and WG-CNN combines WGAN and CNN; these three methods take WG-data as input, and the CNN in WG-CNN uses 3 convolutional layers (3 × 3 × 64, 3 × 3 × 128 and 3 × 3 × 256). ResNet is a residual network based on the best-performing ResNet-18, and CNN is a 3-layer convolutional neural network with the same 3 × 3 × 64, 3 × 3 × 128 and 3 × 3 × 256 structure; the input for ResNet and CNN is the PSSM. In addition, we measured the average training time per protein over the 11,650 proteins of CullPDB for the five methods; these results are also shown in Table 7.

    Table 7.  Comparison of results from ablation experiments.
    CASP11 CASP12 CASP13 CASP14 Training time (s)
    WG-ICRN 71.55 70.81 68.88 69.29 21.9
    WG-Res 71.43 70.67 68.83 69.17 22.4
    WG-CNN 70.47 68.79 67.33 68.24 21.7
    ResNet 68.76 67.84 65.57 66.19 9.8
    CNN 66.62 65.29 63.69 64.71 9.6


    Comparing the results of the five methods in the table: with the same PSSM input, ResNet predicts more accurately than CNN, because the deeper network trains more adequately at the cost of longer training time; overall ResNet is still more effective than CNN. WGAN-extracted features significantly improve Q8 accuracy, but greatly increase training time because of the larger training data. Our proposed ICRN reduces time complexity by introducing Inception and fuses multi-scale features horizontally, so it both trains faster and predicts more accurately than ResNet.

    Furthermore, we compared our method with other models. These methods are all improvements or hybrids of CNN: DeepACLSTM [44] combines asymmetric convolution (ACNN) with a bidirectional long short-term memory network (BiLSTM); 1D-Inception [45], inspired by InceptionV3, extracts features from 1D sequences using several parallel convolutions; DCRNN [46] uses an end-to-end model with a multi-scale CNN and stacked bidirectional GRUs; and CNN_BIGRU [47] uses a CNN and bidirectional gated recurrent units for prediction. We re-ran the code of these methods on the same computer with the same screened training set as WG-Res (11,650 proteins). The experimental results are shown in Table 8. WG-ICRN performs excellently in 8-state secondary structure prediction, benefiting from the depth advantages of ResNet; moreover, a feature matrix contains richer information than a one-dimensional sequence, so using it as input yields better results.

    Table 8.  Q8 accuracy comparison of five methods.
    Method CASP10 CASP11 CASP12 CASP13 CASP14 CB513
    DeepACLSTM 73.09 71.49 70.35 68.91 68.81 75.51
    1D-Inception 71.86 70.07 69.78 67.51 68.3 74.68
    DCRNN 72.11 70.50 69.41 68.05 68.87 74.85
    CNN_BIGRU 71.87 70.94 69.67 67.83 68.69 75.54
    WG-ICRN 73.32 71.55 70.81 68.88 69.29 75.56


    Predicting protein secondary structure is important for comprehensively understanding and exploring the diverse functions and spatial structures of proteins. This paper combines WGAN and ICRN, for the first time, to propose a novel 8-state protein secondary structure prediction method, WG-ICRN. In WG-ICRN, WGAN extracts protein features from amino acid sequences, and we combine the PSSM with the extracted features into a new feature matrix, WG-data, that contains more protein feature information. We then use ICRN to further extract the residue interactions in WG-data and complete the prediction. We introduced the improved Inception module into ResNet to form the ICRN model, which not only reduces parameter calculation and improves efficiency, but also increases network width to improve performance. We evaluated the proposed model on six datasets: CASP10-14 and CB513. Experimental results show that the prediction performance of WG-ICRN is better than the four other popular methods. The experiments also show that WGAN has a powerful feature extraction ability and that ICRN handles protein data more comprehensively; their combination achieves remarkable results. However, it is difficult for WGAN to balance the generator and discriminator, which makes training tedious and time-consuming. In addition, secondary structure is also slightly affected by residues in the global range, but WG-ICRN mainly focuses on local features and ignores long-range features. In future work, we will continue to optimize the feature extraction technique and fully utilize different feature information of protein sequences to improve prediction performance.

    The codes and datasets for this work are available at https://github.com/ShunLi999/WG-ICRN.git

    This work was supported by the National Natural Science Foundation of China (grant number 61375013) and the Natural Science Foundation of Shandong Province (grant number ZR2013FM020).

    The authors declare there are no conflicts of interest.



  • This article has been cited by:

    1. Jiaxin Cui, Research progress on protein structure prediction based on machine learning and deep learning, 2024, 1, 3007-7486, 32, 10.52810/FAAI.2024.003
    2. Jian Zhang, Jingjing Qian, Quan Zou, Feng Zhou, Lukasz Kurgan, 2025, Chapter 1, 978-1-0716-4212-2, 1, 10.1007/978-1-0716-4213-9_1
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
