
Colorectal cancer (CRC) is one of the most common cancers among both men and women, with increasing incidence. The growing analytical workload in pathology laboratories, combined with the well-documented intra- and inter-observer variability in biomarker assessment, has prompted the search for robust machine-based approaches to complement routine practice. In histopathology, deep learning (DL) techniques have been widely applied due to their potential for supporting the analysis and prediction of medically relevant molecular phenotypes and microsatellite instability. Against this background, the current research work presents a metaheuristics technique with deep convolutional neural network-based colorectal cancer classification based on histopathological imaging data (MDCNN-C3HI). The presented MDCNN-C3HI technique examines histopathological images for the classification of CRC. At the initial stage, the MDCNN-C3HI technique applies a bilateral filtering approach to remove noise. Then, the proposed MDCNN-C3HI technique uses an enhanced capsule network with the Adam optimizer for the extraction of feature vectors. For CRC classification, the MDCNN-C3HI technique uses a deep learning modified neural network classifier, whereas the tunicate swarm algorithm is used to fine-tune its hyperparameters. To demonstrate the enhanced performance of the proposed MDCNN-C3HI technique on CRC classification, a wide range of experiments was conducted. The outcomes of this extensive experimentation confirmed the superior performance of the proposed MDCNN-C3HI technique over other existing techniques, achieving a maximum accuracy of 99.45%, a sensitivity of 99.45% and a specificity of 99.45%.
Citation: Abdullah S. AL-Malaise AL-Ghamdi, Mahmoud Ragab. Tunicate swarm algorithm with deep convolutional neural network-driven colorectal cancer classification from histopathological imaging data[J]. Electronic Research Archive, 2023, 31(5): 2793-2812. doi: 10.3934/era.2023141
Colorectal cancer (CRC) is one of the deadliest cancer types, with a high mortality rate, accounting for 1.8 million cases annually across the globe [1]. With an increase in the number of colonoscopies being conducted, colorectal biopsies make up a growing share of the histopathological laboratory workload [2,3]. In recent times, artificial intelligence (AI) has advanced considerably in the healthcare domain and shows high potential for medical application. Pathological analysis of surgical or biopsy excision specimens is the keystone of early CRC diagnosis [4]. Due to the emergence of this screening method, precursor lesions can now be identified and biopsied at earlier stages. Accordingly, a wide range of pre-malignant lesions has been recognized, and the differential diagnosis between malignant and pre-malignant tumors is often extremely complex [5,6]. Additional investigation methods are therefore employed, including special in situ techniques such as in situ hybridization, immunohistochemistry and other molecular methods.
AI technology has advanced pathological techniques in recent years and continues to support clinical services [7]. The critical assessment of histological slides by well-trained pathologists remains the benchmark for the diagnosis of cancer. With the increasing workload on pathologists, this labor-intensive and time-consuming task has lately witnessed the emergence of computational pathology, mainly enabled by whole slide images (WSIs). These WSIs are digital counterparts of glass slides and have received FDA authorization for use in medical diagnosis [8]. AI technology has been used to examine WSIs and produce computer-assisted diagnosis outcomes for cancer through clinical image analysis. A recent study [9] focused on improving the quality of medical images: the quality of fundus images was boosted by using an Archimedes optimization algorithm with Kapur's entropy. Though human pathologists outperform these AI systems, their outcomes are subject to observer bias, fatigue and time constraints in medical settings. Intrinsically, CNNs have the advantage of unimpaired accuracy, yet they are constrained by the operational capability of the processing hardware [10].
The current research article presents a metaheuristics technique with deep convolutional neural network-based colorectal cancer classification based on histopathological imaging data (MDCNN-C3HI). The presented MDCNN-C3HI technique uses the bilateral filtering (BF) approach to get rid of the noise. The proposed MDCNN-C3HI technique uses an enhanced capsule network (ECN) with the Adam optimizer for the extraction of feature vectors. For CRC classification, the MDCNN-C3HI technique uses a deep learning modified neural network (DLMNN) classifier, whereas the tunicate swarm algorithm (TSA) is used to optimally adjust its hyperparameters. To demonstrate the enhanced performance of the MDCNN-C3HI technique on CRC classification, a wide range of experimental analyses was conducted.
Albashish [11] presented two ensemble learning approaches, namely E-CNN and E-CNN (product rule). These approaches rely on variants of pre-trained CNN models for the classification of colon cancer in histopathological images (HIs) into several classes. The ensembles primarily create their members by adjusting pre-trained models such as VGG16, DenseNet121, InceptionV3 and MobileNetV2. The variants of these models depend on a blockwise fine-tuning strategy, in which groups of dense and dropout layers of the pre-trained models are combined to extract discriminative information from the histological images. In an earlier study [12], a pre-trained neural network (AlexNet) was fine-tuned by retraining four of its layers on the database. The primary classification outcomes were promising for every class of images except one. According to that study, to enhance the overall accuracy while retaining the computational effectiveness of the model, the image quality of the underperforming class should be enhanced by using a fairly simple and effective contrast improvement system, instead of executing image enhancement on the whole database.
In the literature [13], both softmax and SVM models have been utilized for the classification of cancer HIs from a binary breast cancer database and a multi-class lung and colon cancer database. To achieve the optimum classification accuracy, a technique that attaches the SVM to the fully connected layer of the softmax-based transfer learning (TL) method was proposed. Ragab et al. [14] proposed the DTLRO-HCBC technique to categorize the existence of breast cancer by using HIs. The presented system had a total of five successive blocks of layers, with convolution, dropout and max-pooling layers present in each block. Ohata et al. [15] proposed the automatic detection of eight varieties of tissue encountered in CRC histopathological evaluation. The authors executed TL based on a CNN infrastructure, adapting the CNN architectures to extract features from the images, which were then fed as input to well-known machine learning techniques.
In the literature [16], an artificial neural network model was trained for the classification of CRC tissue image patches among eight class labels, where the patches were drawn from an open database containing 5000 CRC HIs. A total of 532 multi-level features, observed at distinct scales, were extracted by using visual descriptors such as local binary patterns, a wavelet transform and Gabor filtering. Wang et al. [17] examined a new patch aggregation approach for medical CRC analysis with the help of weakly labeled pathological WSI patches; the proposed technique was analyzed on a massive dataset.
In the current paper, the authors propose a novel MDCNN-C3HI technique for CRC classification on HIs. The presented MDCNN-C3HI technique incorporates the BF approach for noise elimination. Next, an ECN with the Adam optimizer is exploited for feature extraction. For CRC classification, the proposed MDCNN-C3HI technique uses the DLMNN classifier, and its hyperparameters are optimally adjusted by the TSA. Figure 1 depicts the architecture of the proposed MDCNN-C3HI approach.
In the proposed MDCNN-C3HI technique, BF is employed to denoise the input images. BF performs spatially weighted averaging without smoothing across the edges [18]. It is defined as the fusion of two Gaussian filters, one operating in the spatial domain and one in the intensity domain, so that both the spatial and intensity distances are employed for weighting purposes. The BF outcome at pixel location p can be determined by using the formula given below.
$\bar{F}(p)=\frac{1}{N}\sum_{q\in S(p)} e^{-\frac{\|q-p\|^{2}}{2\varepsilon_{s}^{2}}}\, e^{-\frac{|F(q)-F(p)|^{2}}{2\varepsilon_{r}^{2}}}\, F(q)$ | (1) |
In Eq (1), S(p) signifies the spatial neighborhood of pixel p, N implies the normalization constant, and ε_s and ε_r signify the parameters that govern the weights in the spatial and intensity domains, respectively. The combined weight assigned to a neighbor q is
$e^{-\frac{\|q-p\|^{2}}{2\varepsilon_{s}^{2}}}\, e^{-\frac{|F(q)-F(p)|^{2}}{2\varepsilon_{r}^{2}}}$ | (2) |
BF is used in different processes, such as volumetric denoising, tone mapping, texture removal and other applications, notably image denoising. When the filtering is performed in a space of higher dimension than the original domain, the signal formed by the image intensities remains smooth; this creates the fundamental conditions for aggressive downsampling, which accelerates the procedure through two simple nonlinear operations in the augmented space. In this study, BF was utilized as a linear convolutional process.
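To make Eqs (1) and (2) concrete, a brute-force BF can be sketched in a few lines of NumPy. This is a minimal sketch for illustration only, not the implementation used in this work; the window radius and the kernel scales sigma_s and sigma_r are illustrative choices.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter following Eq (1): each output pixel is a
    normalized average of its neighbors, weighted by a spatial Gaussian and
    an intensity (range) Gaussian as in Eq (2)."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.empty_like(img)
    # Spatial kernel exp(-||q - p||^2 / (2 * sigma_s^2)), fixed for all pixels.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel exp(-|F(q) - F(p)|^2 / (2 * sigma_r^2)).
            rng = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            weights = spatial * rng
            out[i, j] = np.sum(weights * patch) / np.sum(weights)  # 1/N term
    return out
```

On a noisy step edge, this filter smooths the flat regions while leaving the edge largely intact, which is why it is preferred over plain Gaussian smoothing for histopathological images.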
To generate a collection of feature vectors, an ECN model was exploited in this study. A set of image pixels is treated as a group of nerve cells (capsules) [19]. Let $y_i \in \{\text{healthy}, \text{tumor}\}$ denote the output of the i-th capsule, and let $we_{ij}$ indicate the weight matrix; the prediction vector is then
$\hat{y}_{j|i}=we_{ij}\, y_{i}$ | (3) |
Here, $\hat{y}_{j|i}$ corresponds to the prediction vector that the i-th capsule provides for the j-th parent capsule, and the pixel range is employed to evaluate the magnitude of the weights: a weight is increased when the pixel associates its possible outcome with the correct group, and decreased otherwise. The softmax function is employed between each capsule in the prior layer and its potential parents. Figure 2 demonstrates the framework of the capsule network. The coupling is encoded as the coefficient $c_{ij}$, in which the logit $b_{ij}$ denotes the log prior probability that the i-th capsule in the prior layer routes to the j-th capsule in the subsequent layer. In general, the "routing-by-agreement" method is implemented through these logits in every layer.
$c_{ij}=\frac{e^{b_{ij}}}{\sum_{k} e^{b_{ik}}}$ | (4) |
The total input to the j-th parent capsule is then computed as a weighted sum over the prediction vectors from the preceding layer, as given below.
$s_{j}=\sum_{i} c_{ij}\, \hat{y}_{j|i}$ | (5) |
The compressed pixel vector is determined within (0, 1) by a nonlinear technique named 'squashing', and it is evaluated as follows.
$va_{j}=\frac{\|s_{j}\|^{2}}{1+\|s_{j}\|^{2}}\cdot\frac{s_{j}}{\varepsilon+\|s_{j}\|}$ | (6) |
Here, $\varepsilon=10^{-7}$. The activation passed to the succeeding layer capsule is attained as follows.
$a_{ij}=va_{j}\times \hat{y}_{j|i}$ | (7) |
Capsule classification can be considered by the margin loss (Lossk) in the class capsule k for the capsule network as follows:
$Loss_{k}=T_{k}\max(0,\, m^{+}-\|va_{k}\|)^{2}+\lambda(1-T_{k})\max(0,\, \|va_{k}\|-m^{-})^{2}$ | (8) |
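The squashing of Eq (6), the routing softmax of Eq (4) and the margin loss of Eq (8) can be sketched in NumPy as follows. The margin thresholds $m^{+}=0.9$, $m^{-}=0.1$ and $\lambda=0.5$ are the usual capsule-network defaults and are assumptions here, since the article does not state them.

```python
import numpy as np

def squash(s, eps=1e-7):
    """Eq (6): shrinks short vectors toward zero and long vectors toward
    unit length while preserving direction."""
    norm_sq = np.sum(s ** 2)
    return (norm_sq / (1.0 + norm_sq)) * s / (eps + np.sqrt(norm_sq))

def routing_softmax(b):
    """Eq (4): coupling coefficients c_ij from the routing logits b_ij."""
    e = np.exp(b - np.max(b))  # subtract the max for numerical stability
    return e / np.sum(e)

def margin_loss(v_norms, targets, m_pos=0.9, m_neg=0.1, lam=0.5):
    """Eq (8): v_norms holds the lengths ||va_k|| of the class capsules,
    targets is the 0/1 indicator T_k."""
    pos = targets * np.maximum(0.0, m_pos - v_norms) ** 2
    neg = lam * (1 - targets) * np.maximum(0.0, v_norms - m_neg) ** 2
    return float(np.sum(pos + neg))
```

A correctly classified capsule (length above $m^{+}$ for the true class, below $m^{-}$ for the others) incurs zero loss, which is what drives the class capsule lengths apart during training.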
To adjust the hyperparameters related to the ECN model, the Adam optimizer is utilized in this study. Adam optimization differs from the typical stochastic gradient descent approach and is utilized to update the network weights on the training database [20]. It builds on the AdaGrad process, acts as an easily tunable method, and can be viewed as a combination of AdaGrad and momentum. Given the weights $w^{(t)}$ and loss $L^{(t)}$, in which the index t denotes the current training round, the parameters are updated by the Adam optimizer as given below.
$m_{w}^{(t+1)} \leftarrow \beta_{1} m_{w}^{(t)}+(1-\beta_{1})\nabla_{w} L^{(t)}$ | (9) |
$v_{w}^{(t+1)} \leftarrow \beta_{2} v_{w}^{(t)}+(1-\beta_{2})\left(\nabla_{w} L^{(t)}\right)^{2}$ | (10) |
$\hat{m}_{w}=\frac{m_{w}^{(t+1)}}{1-\beta_{1}^{\,t+1}}$ | (11) |
$\hat{v}_{w}=\frac{v_{w}^{(t+1)}}{1-\beta_{2}^{\,t+1}}$ | (12) |
$w^{(t+1)} \leftarrow w^{(t)}-\eta\,\frac{\hat{m}_{w}}{\sqrt{\hat{v}_{w}}+\epsilon}$ | (13) |
In Eqs (9) and (10), $\beta_{1}$ and $\beta_{2}$ correspond to the forgetting factors for the first and second moments of the gradients, respectively. In Eq (13), $\epsilon$ signifies a small scalar that is employed to prevent division by zero.
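Eqs (9)-(13) translate directly into a NumPy update step. As a toy check, the sketch below runs the update on the one-dimensional objective $L(w)=w^{2}$, whose gradient is $2w$; the learning rate and decay values are illustrative, not the settings used in the paper.

```python
import numpy as np

def adam_step(w, grad, m, v, t, eta=0.05, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update implementing Eqs (9)-(13); t starts at 0, so the
    bias corrections use the exponent t + 1."""
    m = beta1 * m + (1 - beta1) * grad                 # Eq (9): first moment
    v = beta2 * v + (1 - beta2) * grad ** 2            # Eq (10): second moment
    m_hat = m / (1 - beta1 ** (t + 1))                 # Eq (11): bias-corrected
    v_hat = v / (1 - beta2 ** (t + 1))                 # Eq (12): bias-corrected
    w = w - eta * m_hat / (np.sqrt(v_hat) + eps)       # Eq (13): weight update
    return w, m, v

# Toy run: minimize L(w) = w^2 starting from w = 5.
w, m, v = 5.0, 0.0, 0.0
for t in range(2000):
    w, m, v = adam_step(w, 2.0 * w, m, v, t)
```

The bias corrections in Eqs (11) and (12) matter early on: without them the moving averages, initialized at zero, would underestimate the true moments during the first rounds.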
To classify the HIs for the identification and classification of CRC stages, a DLMNN model is used in this study. Each input value is fed to a distinct node of the input layer of the classifier [21]. A weight, initially an arbitrarily allocated value, is associated with every input. The succeeding layer is identified as the hidden layer (HL). Starting from the random weights, the backpropagation (BP) technique is used to obtain the values for the optimization approach. Then, the activation function is applied and the outcomes are passed forward to the subsequent layer. These weights provide the maximum influence on the outcome of the classification. The algorithmic stages of the DLMNN technique are given below.
Step 1: Distribute the score values to select the features and match the weight values as follows.
Ej={E1,E2,E3…En}, | (14) |
Wi={W1,W2,W3…Wn}, | (15) |
Step 2: Multiply the input value by the weighted vector value, which can be arbitrarily selected. Then, the values are added together as given below.
$R=\sum_{i=1}^{n} E_{i} W_{i}$, | (16) |
where Wi denotes the weight values, Ei denotes the entropy values and R denotes the summed value.
Step 3: Evaluate the activation function (AFi).
AFi=f(n∑i=1EiWi), | (17) |
Step 4: Estimate the results of the HLs.
Yi=Ai+∑GiWi, | (18) |
where Wi denotes the weight between the input and the HLs, Ai denotes the bias value and Gi denotes the values that are altered by the application of AF.
Step 5: Re‐execute Steps 1–3 for all layers of the DLMNN approach. Eventually, the resultant unit is evaluated by summing all of the input signals weighted to attain the outcome layer neuron value.
Oui=Aj+∑PiWj, | (19) |
where Wj is the weight of the HL, Pi is the value of the layers that produce the result and Oui is the outcome unit.
Step 6: Compare the network outcome with the objective value. The difference between these two values produces the error signal, which is mathematically defined as follows.
Er=Tai−Oui, | (20) |
where Tai is the target outcome, Er is the error signal and Oui is the classification of the current output.
Step 7: At this point, the outcome unit is weighed against the objective value, and the relative error is defined accordingly. From this error, the value of δi is measured and employed to propagate the error at the outputs back to the other units in the network:
δi=Er[f(Oui)], | (21) |
Step 8: The weighted correction is determined based on the BP technique, for which the equation is as follows:
$wc_{i}=\beta\delta_{i}(E_{i})$, | (22) |
where δi denotes the error that is distributed back through the network, wci denotes the weighted correction, Ei denotes the input vector and β is the momentum term.
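Steps 1-8 above can be sketched as a single forward pass and weight-correction computation in NumPy. This is an illustrative reading of the equations, with a sigmoid assumed for the activation f and a single output unit; the layer sizes and values in the example are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dlmnn_step(E, W_h, A_h, W_o, A_o, target, beta=0.1):
    """One forward/backward pass loosely following Steps 1-8 of the DLMNN.
    E: input score vector (Eq (14)); W_h/A_h, W_o/A_o: hidden and output
    weights/biases; beta: momentum term of Eq (22). Illustrative only."""
    G = sigmoid(W_h @ E + A_h)    # Steps 2-4: weighted sum R, then AF (Eqs (16)-(18))
    Ou = float(W_o @ G + A_o)     # Step 5: output unit, Eq (19)
    Er = target - Ou              # Step 6: error signal, Eq (20)
    delta = Er * sigmoid(Ou)      # Step 7: Eq (21), delta = Er[f(Ou)]
    wc = beta * delta * E         # Step 8: weighted correction, Eq (22)
    return Ou, Er, wc
```

The correction wc has the same shape as the input vector, so each input weight receives an adjustment proportional to both the propagated error and its own input value.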
Finally, the TSA method is applied for the hyperparameter tuning process of the DLMNN model. Owing to the continual deepening of the model, the number of parameters of deep learning (DL) models also increases quickly, which results in model overfitting. At the same time, different hyperparameters have a significant impact on the efficiency of the CNN model. In particular, hyperparameters such as the epoch count, batch size and learning rate are essential to attaining effectual outcomes. Since the trial-and-error method for hyperparameter tuning is a tedious and error-prone process, metaheuristic algorithms can be applied. Therefore, in this work, we employ the TSA for the parameter selection of the DLMNN model. In the TSA method, two tunicate behaviors, jet propulsion and swarm intelligence, are exploited to optimize the objective function [22]. Tunicates satisfy three significant conditions: moving toward the position of the optimal search agent, avoiding conflicts with other search agents and remaining close to the optimal search agent. This behavior is defined by the following mathematical model. To prevent conflicts with other search agents (i.e., other tunicates), the vector $\vec{A}$ is applied to calculate the new location of the search agent.
$\vec{A}=\frac{\vec{G}}{\vec{M}}$ | (23) |
Here, the vector →G, i.e., the force of gravity, is obtained based on the following expression.
→G=c2+c3−→F | (24) |
Now, regarding the →F vector, the water flow vector from the ocean depth is obtained by using the following expression:
→F=2.c1 | (25) |
Here, the c1,c2, and c3 variables signify the random values within [0, 1]. At last, the vector M represents the social force among the searching agents that is computed by using the following expression:
$\vec{M}=\left\lfloor P_{min}+c_{1}\cdot(P_{max}-P_{min})\right\rfloor$ | (26) |
Here, Pmin and Pmax respectively denote the initial and subordinate speeds that generate social interaction. In the TSA, the Pmin and Pmax values are set to 1 and 4, correspondingly. Then, the agent searches for and moves toward the position of the optimal neighbor, as demonstrated herein.
$\vec{PD}=\left|\vec{FS}-rand()\cdot\vec{P}_{p}(x)\right|$ | (27) |
Here, x indicates the current iteration, $\vec{FS}$ indicates the position of the optimal food source and $\vec{P}_{p}(x)$ signifies the tunicate's location. The search agent then converges toward the optimal search agent while preserving its position relative to it, the food source.
$\vec{P}_{p}(x)=\begin{cases}\vec{FS}+\vec{A}\cdot\vec{PD}, & \text{if } rand \geq 0.5\\ \vec{FS}-\vec{A}\cdot\vec{PD}, & \text{if } rand < 0.5\end{cases}$ | (28) |
Now, →Pp(x) shows the upgraded location of the tunicate based on the location of the food source, i.e., FS. Finally, in the mathematical modeling of swarm performance, two optimal positions have been retained and the location of the additional searching agents is upgraded based on these two solutions.
$\vec{P}_{p}(x+1)=\frac{\vec{P}_{p}(x)+\vec{P}_{p}(x+1)}{2+c_{1}}$ | (29) |
Algorithm 1: Pseudocode of TSA

Input: Tunicate population P_p(x)
Output: Optimal fitness value FS

Procedure TSA
    Initialize the variables A, G, F, M and max_iterations
    Set P_min ← 1; P_max ← 4; Swarm ← 0
    while (x < max_iterations) do
        for i ← 1 to n do
            FS ← ComputeFitness(P_p(x))
            c1, c2, c3, rand ← Rand()
            M ← ⌊P_min + c1 × (P_max − P_min)⌋
            F ← 2 × c1
            G ← c2 + c3 − F
            A ← G / M
            PD ← |FS − rand × P_p(x)|
            if (rand ≥ 0.5) then
                Swarm ← Swarm + FS + A × PD
            else
                Swarm ← Swarm + FS − A × PD
            end if
        end for
        P_p(x) ← Swarm / (2 + c1)
        Swarm ← 0
        Update the parameters A, G, F and M
        x ← x + 1
    end while
    return FS
End Procedure

Procedure ComputeFitness(P_p(x))
    for i ← 1 to n do
        FIT_p[i] ← FitnessFunction
    end for
    FIT_p_best ← BEST(FIT_p)
    return FIT_p_best
End Procedure

Procedure BEST(FIT_p)
    Best ← FIT_p[0]
    for i ← 1 to n do
        if (FIT_p[i] < Best) then
            Best ← FIT_p[i]
        end if
    end for
    return Best
End Procedure
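Eqs (23)-(29) and Algorithm 1 can be condensed into a compact NumPy routine. The sketch below minimizes a toy sphere function rather than tuning DLMNN hyperparameters, and the per-agent greedy update of the food source FS is a simplification of the pseudocode; bounds, population size and iteration count are illustrative.

```python
import numpy as np

def tsa_minimize(fitness, dim, n_agents=30, max_iter=200,
                 lb=-5.0, ub=5.0, p_min=1.0, p_max=4.0, seed=1):
    """Minimal TSA sketch following Eqs (23)-(29): jet propulsion moves an
    agent relative to the food source FS; swarm behavior averages it with
    its previous position."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, size=(n_agents, dim))
    fs = pop[np.argmin([fitness(p) for p in pop])].copy()  # best food source
    for _ in range(max_iter):
        for i in range(n_agents):
            c1, c2, c3 = rng.random(3)
            M = np.floor(p_min + c1 * (p_max - p_min))  # Eq (26): social force
            F = 2.0 * c1                                # Eq (25): water flow
            G = c2 + c3 - F                             # Eq (24): gravity
            A = G / M                                   # Eq (23)
            PD = np.abs(fs - rng.random() * pop[i])     # Eq (27)
            if rng.random() >= 0.5:                     # Eq (28): jet propulsion
                new = fs + A * PD
            else:
                new = fs - A * PD
            pop[i] = (pop[i] + new) / (2.0 + c1)        # Eq (29): swarm behavior
            pop[i] = np.clip(pop[i], lb, ub)
            if fitness(pop[i]) < fitness(fs):           # greedy FS update
                fs = pop[i].copy()
    return fs, fitness(fs)

best, val = tsa_minimize(lambda x: float(np.sum(x ** 2)), dim=3)
```

Because M is floored into {1, 2, 3}, the division in Eq (23) never hits zero; in the full pipeline, each position component would encode a DLMNN hyperparameter (e.g., learning rate or batch size) and the fitness would be the validation error.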
The proposed model was simulated by using Python 3.6.5 on a PC with an i5-8600K CPU, a GeForce 1050 Ti 4 GB GPU, 16 GB RAM, a 250 GB SSD and a 1 TB HDD. The parameter settings were as follows: learning rate: 0.01, dropout: 0.5, batch size: 5, epoch count: 50 and activation function: ReLU. In this section, the CRC classification performance of the MDCNN-C3HI method is evaluated on the Warwick-QU dataset [23,24]. This dataset contains 165 images of two classes, namely, malignant tumor (MT) and benign tumor (BT), as shown in Table 1. For experimental validation, tenfold cross-validation was used.
Classes | Image Count |
BT | 74 |
MT | 91 |
Total Number of Images | 165 |
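The tenfold protocol on the 165-image Warwick-QU set can be illustrated with a small stratified splitter. This NumPy sketch is not the authors' code; it simply shows how each fold keeps the 74/91 BT/MT ratio of Table 1 approximately constant.

```python
import numpy as np

def stratified_kfold_indices(labels, k=10, seed=42):
    """Stratified k-fold split: shuffle each class separately and deal its
    samples round-robin into k folds, preserving the class ratio."""
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(k)]
    for cls in np.unique(labels):
        idx = np.where(labels == cls)[0]
        rng.shuffle(idx)
        for pos, sample in enumerate(idx):
            folds[pos % k].append(sample)
    return [np.array(sorted(f)) for f in folds]

labels = np.array([0] * 74 + [1] * 91)  # 74 BT and 91 MT images (Table 1)
folds = stratified_kfold_indices(labels, k=10)
```

In each of the ten rounds, one fold serves as the test set and the remaining nine as the training set, so every image is tested exactly once.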
The confusion matrices, as generated by the proposed MDCNN-C3HI technique for the CRC classification process, are shown in Figure 3. The results exemplify that the proposed MDCNN-C3HI model appropriately distinguished the images as a BT or MT for every epoch case.
Table 2 and Figure 4 demonstrate the average CRC classification outcomes of the MDCNN-C3HI technique for varying epoch counts. The outcomes show that the proposed MDCNN-C3HI model appropriately recognized the images as BT or MT for each epoch count. For example, at 100 epochs, the MDCNN-C3HI method achieved an average accuracy of 96.20%.
Class | Accuracy | Precision | Sensitivity | Specificity | F-Score | MCC |
EPOCH_(100) | ||||||
BT | 94.59 | 97.22 | 94.59 | 97.80 | 95.89 | 92.66 |
MT | 97.80 | 95.70 | 97.80 | 94.59 | 96.74 | 92.66 |
Average | 96.20 | 96.46 | 96.20 | 96.20 | 96.31 | 92.66 |
EPOCH_(200) | ||||||
BT | 97.30 | 100.00 | 97.30 | 100.00 | 98.63 | 97.57 |
MT | 100.00 | 97.85 | 100.00 | 97.30 | 98.91 | 97.57 |
Average | 98.65 | 98.92 | 98.65 | 98.65 | 98.77 | 97.57 |
EPOCH_(300) | ||||||
BT | 94.59 | 100.00 | 94.59 | 100.00 | 97.22 | 95.19 |
MT | 100.00 | 95.79 | 100.00 | 94.59 | 97.85 | 95.19 |
Average | 97.30 | 97.89 | 97.30 | 97.30 | 97.54 | 95.19 |
EPOCH_(400) | ||||||
BT | 100.00 | 98.67 | 100.00 | 98.90 | 99.33 | 98.78 |
MT | 98.90 | 100.00 | 98.90 | 100.00 | 99.45 | 98.78 |
Average | 99.45 | 99.33 | 99.45 | 99.45 | 99.39 | 98.78 |
EPOCH_(500) | ||||||
BT | 95.95 | 94.67 | 95.95 | 95.60 | 95.30 | 91.44 |
MT | 95.60 | 96.67 | 95.60 | 95.95 | 96.13 | 91.44 |
Average | 95.78 | 95.67 | 95.78 | 95.78 | 95.72 | 91.44 |
EPOCH_(600) | ||||||
BT | 100.00 | 96.10 | 100.00 | 96.70 | 98.01 | 96.40 |
MT | 96.70 | 100.00 | 96.70 | 100.00 | 98.32 | 96.40 |
Average | 98.35 | 98.05 | 98.35 | 98.35 | 98.17 | 96.40 |
EPOCH_(700) | ||||||
BT | 97.30 | 98.63 | 97.30 | 98.90 | 97.96 | 96.33 |
MT | 98.90 | 97.83 | 98.90 | 97.30 | 98.36 | 96.33 |
Average | 98.10 | 98.23 | 98.10 | 98.10 | 98.16 | 96.33 |
EPOCH_(800) | ||||||
BT | 94.59 | 97.22 | 94.59 | 97.80 | 95.89 | 92.66 |
MT | 97.80 | 95.70 | 97.80 | 94.59 | 96.74 | 92.66 |
Average | 96.20 | 96.46 | 96.20 | 96.20 | 96.31 | 92.66 |
EPOCH_(900) | ||||||
BT | 100.00 | 98.67 | 100.00 | 98.90 | 99.33 | 98.78 |
MT | 98.90 | 100.00 | 98.90 | 100.00 | 99.45 | 98.78 |
Average | 99.45 | 99.33 | 99.45 | 99.45 | 99.39 | 98.78 |
EPOCH_(1000) | ||||||
BT | 100.00 | 97.37 | 100.00 | 97.80 | 98.67 | 97.59 |
MT | 97.80 | 100.00 | 97.80 | 100.00 | 98.89 | 97.59 |
Average | 98.90 | 98.68 | 98.90 | 98.90 | 98.78 | 97.59 |
EPOCH_(1100) | ||||||
BT | 100.00 | 98.67 | 100.00 | 98.90 | 99.33 | 98.78 |
MT | 98.90 | 100.00 | 98.90 | 100.00 | 99.45 | 98.78 |
Average | 99.45 | 99.33 | 99.45 | 99.45 | 99.39 | 98.78 |
EPOCH_(1200) | ||||||
BT | 97.30 | 100.00 | 97.30 | 100.00 | 98.63 | 97.57 |
MT | 100.00 | 97.85 | 100.00 | 97.30 | 98.91 | 97.57 |
Average | 98.65 | 98.92 | 98.65 | 98.65 | 98.77 | 97.57 |
Simultaneously, at 200 epochs, the proposed MDCNN-C3HI technique achieved an average accuracy of 98.65%. Concurrently, at 400 epochs, the MDCNN-C3HI approach produced an average accuracy of 99.45%. Along with that, at 1000 epochs, the proposed MDCNN-C3HI method achieved an average accuracy of 98.90%. Finally, at 1200 epochs, the presented MDCNN-C3HI technique achieved an average accuracy of 98.65%.
Both the training accuracy (TACC) and validation accuracy (VACC) values of the proposed MDCNN-C3HI technique were investigated in terms of CRC classification performance; the results are presented in Figure 5. The figure implies that the proposed MDCNN-C3HI technique demonstrated enhanced performance, achieving the maximum TACC and VACC values. It is to be noted that the proposed MDCNN-C3HI approach yielded the highest TACC outcomes.
The training loss (TLS) and validation loss (VLS) values for the proposed MDCNN-C3HI technique were determined upon CRC classification performance; the results are portrayed in Figure 6. The figure infers that the proposed MDCNN-C3HI system resulted in improved performance, achieving the minimum TLS and VLS values. It is visible that the proposed MDCNN-C3HI technique achieved low VLS outcomes.
Table 3 demonstrates the comparative CRC classification analysis outcomes between the proposed MDCNN-C3HI method and other existing techniques [25].
Method | Accuracy (%) | Sensitivity (%) | Specificity (%)
MDCNN-C3HI | 99.45 | 99.45 | 99.45 |
ResNet-18 | 92.09 | 97.02 | 84.82 |
SC-CNN | 81.93 | 92.02 | 93.81 |
CP-CNN | 87.07 | 96.85 | 84.07 |
AAI-CCDC | 90.52 | 93.91 | 93.07 |
VGG-16 | 81.82 | 85.11 | 89.26 |
Inception | 84.46 | 91.77 | 93.92 |
Figure 7 shows the accuracy assessment outcomes of the proposed MDCNN-C3HI and other DL models. The results demonstrate that the VGG-16 and SC-CNN models yielded low accuracy values of 81.82% and 81.93%, respectively. Then, the Inception and CP-CNN models achieved slightly improved accuracy values of 84.46% and 87.07%, respectively. Meanwhile, the AAI-CCDC and ResNet-18 models achieved closer performance, with accuracy results of 90.52% and 92.09%, respectively. However, the proposed MDCNN-C3HI technique achieved the best performance, with a maximum accuracy of 99.45%.
Figure 8 shows the sensitivity analysis outcomes achieved by the proposed MDCNN-C3HI and other DL models. The outcomes demonstrate that the VGG-16 and Inception models yielded low sensitivity values of 85.11% and 91.77%, correspondingly. Then, the SC-CNN and AAI-CCDC models achieved slightly enhanced sensitivity values of 92.02% and 93.91%, correspondingly. Meanwhile, the CP-CNN and ResNet-18 models achieved closer performance, with sensitivity values of 96.85% and 97.02%, correspondingly. However, the proposed MDCNN-C3HI technique achieved superior performance, with the highest sensitivity of 99.45%.
Figure 9 portrays the specificity examination outcomes accomplished by the proposed MDCNN-C3HI and other DL models. The outcomes demonstrate that the CP-CNN and ResNet-18 models achieved low specificity values of 84.07% and 84.82%, correspondingly. Next, the VGG-16 and AAI-CCDC models achieved slightly enhanced specificity values of 89.26% and 93.07%, correspondingly. Meanwhile, the SC-CNN and Inception models exhibited closer performance, with specificity values of 93.81% and 93.92%, correspondingly. However, the proposed MDCNN-C3HI technique achieved excellent performance, with a maximum specificity of 99.45%.
Finally, a brief computation time (CT) examination was conducted between the proposed MDCNN-C3HI and other current models; the results are shown in Figure 10. The values confirm that the proposed MDCNN-C3HI model achieved an effective outcome, with a minimal CT of 0.38 s. In contrast, the rest of the models, i.e., ResNet-18, SC-CNN, CP-CNN, AAI-CCDC, VGG-16 and Inception, yielded higher CT values of 0.60, 0.55, 1.10, 1.20, 0.50 and 0.76 s, respectively. These results confirm the effectual characteristics of the proposed MDCNN-C3HI model for CRC classification.
In this study, the authors developed a new MDCNN-C3HI technique for CRC classification based on HIs. The presented MDCNN-C3HI technique makes use of the BF approach for noise elimination. Next, an ECN with the Adam optimizer is exploited for feature extraction. For CRC classification, the proposed MDCNN-C3HI technique uses the DLMNN classifier, and its hyperparameters are optimally adjusted by TSA. To demonstrate the enhanced performance of the proposed MDCNN-C3HI technique on CRC classification, a wide range of experimental analyses was conducted. The extensive experimentation outcomes confirmed the superior performance of the proposed MDCNN-C3HI technique compared to other existing techniques. Thus, the proposed MDCNN-C3HI technique can be used as a proficient approach for CRC classification. In the future, an ensemble fusion-based DL model can be developed to improve the performance of the MDCNN-C3HI technique.
This research work was funded by Institutional Fund Projects under grant no. IFPHI: 243-611-2020. The authors gratefully acknowledge the technical and financial support provided by the Ministry of Education and the Deanship of Scientific Research (DSR) at King Abdulaziz University (KAU), Jeddah, Saudi Arabia.
The authors declare that there is no conflict of interest.
| Classes | Image Count |
| --- | --- |
| BT | 74 |
| MT | 91 |
| Total Number of Images | 165 |
| Class | Accuracy (%) | Precision (%) | Sensitivity (%) | Specificity (%) | F-Score (%) | MCC (%) |
| --- | --- | --- | --- | --- | --- | --- |
| Epoch 100 | | | | | | |
| BT | 94.59 | 97.22 | 94.59 | 97.80 | 95.89 | 92.66 |
| MT | 97.80 | 95.70 | 97.80 | 94.59 | 96.74 | 92.66 |
| Average | 96.20 | 96.46 | 96.20 | 96.20 | 96.31 | 92.66 |
| Epoch 200 | | | | | | |
| BT | 97.30 | 100.00 | 97.30 | 100.00 | 98.63 | 97.57 |
| MT | 100.00 | 97.85 | 100.00 | 97.30 | 98.91 | 97.57 |
| Average | 98.65 | 98.92 | 98.65 | 98.65 | 98.77 | 97.57 |
| Class | Accuracy (%) | Precision (%) | Sensitivity (%) | Specificity (%) | F-Score (%) | MCC (%) |
| --- | --- | --- | --- | --- | --- | --- |
| Epoch 300 | | | | | | |
| BT | 94.59 | 100.00 | 94.59 | 100.00 | 97.22 | 95.19 |
| MT | 100.00 | 95.79 | 100.00 | 94.59 | 97.85 | 95.19 |
| Average | 97.30 | 97.89 | 97.30 | 97.30 | 97.54 | 95.19 |
| Epoch 400 | | | | | | |
| BT | 100.00 | 98.67 | 100.00 | 98.90 | 99.33 | 98.78 |
| MT | 98.90 | 100.00 | 98.90 | 100.00 | 99.45 | 98.78 |
| Average | 99.45 | 99.33 | 99.45 | 99.45 | 99.39 | 98.78 |
| Epoch 500 | | | | | | |
| BT | 95.95 | 94.67 | 95.95 | 95.60 | 95.30 | 91.44 |
| MT | 95.60 | 96.67 | 95.60 | 95.95 | 96.13 | 91.44 |
| Average | 95.78 | 95.67 | 95.78 | 95.78 | 95.72 | 91.44 |
| Epoch 600 | | | | | | |
| BT | 100.00 | 96.10 | 100.00 | 96.70 | 98.01 | 96.40 |
| MT | 96.70 | 100.00 | 96.70 | 100.00 | 98.32 | 96.40 |
| Average | 98.35 | 98.05 | 98.35 | 98.35 | 98.17 | 96.40 |
| Epoch 700 | | | | | | |
| BT | 97.30 | 98.63 | 97.30 | 98.90 | 97.96 | 96.33 |
| MT | 98.90 | 97.83 | 98.90 | 97.30 | 98.36 | 96.33 |
| Average | 98.10 | 98.23 | 98.10 | 98.10 | 98.16 | 96.33 |
| Epoch 800 | | | | | | |
| BT | 94.59 | 97.22 | 94.59 | 97.80 | 95.89 | 92.66 |
| MT | 97.80 | 95.70 | 97.80 | 94.59 | 96.74 | 92.66 |
| Average | 96.20 | 96.46 | 96.20 | 96.20 | 96.31 | 92.66 |
| Epoch 900 | | | | | | |
| BT | 100.00 | 98.67 | 100.00 | 98.90 | 99.33 | 98.78 |
| MT | 98.90 | 100.00 | 98.90 | 100.00 | 99.45 | 98.78 |
| Average | 99.45 | 99.33 | 99.45 | 99.45 | 99.39 | 98.78 |
| Epoch 1000 | | | | | | |
| BT | 100.00 | 97.37 | 100.00 | 97.80 | 98.67 | 97.59 |
| MT | 97.80 | 100.00 | 97.80 | 100.00 | 98.89 | 97.59 |
| Average | 98.90 | 98.68 | 98.90 | 98.90 | 98.78 | 97.59 |
| Epoch 1100 | | | | | | |
| BT | 100.00 | 98.67 | 100.00 | 98.90 | 99.33 | 98.78 |
| MT | 98.90 | 100.00 | 98.90 | 100.00 | 99.45 | 98.78 |
| Average | 99.45 | 99.33 | 99.45 | 99.45 | 99.39 | 98.78 |
| Epoch 1200 | | | | | | |
| BT | 97.30 | 100.00 | 97.30 | 100.00 | 98.63 | 97.57 |
| MT | 100.00 | 97.85 | 100.00 | 97.30 | 98.91 | 97.57 |
| Average | 98.65 | 98.92 | 98.65 | 98.65 | 98.77 | 97.57 |
| Method | Accuracy (%) | Sensitivity (%) | Specificity (%) |
| --- | --- | --- | --- |
| MDCNN-C3HI | 99.45 | 99.45 | 99.45 |
| ResNet-18 | 92.09 | 97.02 | 84.82 |
| SC-CNN | 81.93 | 92.02 | 93.81 |
| CP-CNN | 87.07 | 96.85 | 84.07 |
| AAI-CCDC | 90.52 | 93.91 | 93.07 |
| VGG-16 | 81.82 | 85.11 | 89.26 |
| Inception | 84.46 | 91.77 | 93.92 |