
The environment around us naturally represents a number of its components in fractal structures. Some fractal patterns can also be artificially simulated using real-life mathematical systems. In this paper, we use the fractal operator combined with the fractional operator, under both exponential and Mittag-Leffler laws, to analyze and solve generalized three-dimensional systems related to real-life phenomena. Numerical solutions are provided in each case and applications to some related systems are given. Numerical simulations show the existence of the models' initial three-dimensional structure, followed by its self-replication in a mathematically produced fractal structure. The whole dynamics are also impacted by the fractional part of the operator as the derivative order changes.
Citation: Emile Franc Doungmo Goufo, Abdon Atangana. On three dimensional fractal dynamics with fractional inputs and applications[J]. AIMS Mathematics, 2022, 7(2): 1982-2000. doi: 10.3934/math.2022114
A cyber-physical system (CPS) is commonly utilized to observe and secure a physical environment using a set of components, namely control units, physical objects, actuators, and sensors [1]. Given the dreadful consequences of a CPS failure, it is critically important to protect a CPS from dangerous attacks. In this paper, we address the reliability problem of a CPS intended to endure hazardous attacks over a long time without energy replacement [2]. A CPS often works in a rough environment where energy renewal is not possible, and nodes may be captured or compromised at times. As a result, an intrusion detection system (IDS) is needed to detect harmful nodes without unnecessarily wasting energy, so as to extend the network lifetime. IDSs designed for CPSs have attracted significant interest [3]. IDSs are used to identify security attacks and to monitor computer systems. Usually, IDSs are of two major types: signature-based and behavior-based. A signature-based IDS compares the real-time behavior of the computer system against known security attacks [4]. It cannot detect unknown attacks (signatures), since it depends on known attack models. This is notably important for CPSs, as they operate independently for long periods, which makes the usual upgrading or patching in the field difficult [5].
Contrastingly, behavior-based systems identify intrusions by observing a system's active execution to identify suspect behavior, and can identify both known and unknown attacks [6]. In an IDS, an initial layer is needed to quickly assess, identify, and respond to dangerous cyber traffic. Network intrusion detection is vital for identifying and monitoring possible risks. Besides, public datasets for intrusion detection contain a large amount of extreme data. In complex network infrastructures, managing a huge amount of data is another problem that these methods usually fail to solve [7]. For such reasons, classical IDSs based on conventional machine learning (ML) methods commonly have a few limitations, like poor real-time performance and low generalization capability. In the past years, many researchers have developed variations of IDSs using deep learning (DL), ML, and other statistical approaches [8]. Over the years, DL methods have been developed quickly and are widely used across various industries due to the continuous growth of computational capacity and data availability. Both traditional and DL models have been examined using familiar classification metrics. In multiple areas, including image recognition and natural language processing (NLP), DL has produced excellent outcomes [9]. Many researchers have used convolutional neural networks (CNNs) effectively to find cyberattacks and to raise the intelligence and correctness of network intrusion detection. A major cause of failure is that network traffic is not in an image data format [10].
We introduce an Equilibrium Optimizer with Deep Recurrent Neural Networks Enabled Intrusion Detection (EODRNN-ID) technique in a secure CPS environment. In the presented EODRNN-ID technique, a min-max normalization algorithm takes place to scale the input dataset. Besides, the EODRNN-ID technique involves an EO-based feature selection approach to choose the features and diminish the high-dimensionality problem. For intrusion detection, the EODRNN-ID technique exploits the DRNN model. Finally, the hyperparameters related to the DRNN model are tuned by the chimp optimization algorithm (COA). The simulation study of the EODRNN-ID model is verified on a benchmark ID dataset.
Almuqren et al. [11] proposed an Explainable AI Enabled Intrusion Detection Approach for Secured CPSs (XAIID-SCPS). This method especially focuses on classification and intrusion detection in the CPS. In their study, a Hybrid Enhanced GSO (HEGSO) method was employed for the FS. In the IDS, the Improved ENN (IENN) algorithm was applied with the Enhanced Fruitfly Optimizer (EFFO) method for parameter optimization. Hilal et al. [12] presented an imbalanced GAN (IGAN) with an optimal kernel ELM (OKELM), named the IGAN-OKELM method, for IDS in the CPS platform. Furthermore, the OKELM framework was implemented as the classifier, and optimal parameter tuning of the KELM architecture was executed through the sandpiper optimization (SPO) method, thereby improving the effectiveness of the IDS.
The authors [13] implemented FID-GAN, an innovative fog-based, unsupervised IDS for CPSs employing GANs. The IDS was introduced for the fog model, which brings computation nearer to the terminal nodes and helps meet low-latency requirements. Almutairi et al. [14] employed a Quantum Dwarf Mongoose Optimizer with an Ensemble DL Intrusion Detection (QDMO-EDLID) method in the CPS. This algorithm targets the identification of intrusions by employing ensemble learning and FS methods. Furthermore, an ensemble of Deep Autoencoder (DAE), Convolutional Residual Network (CRN), and Deep Belief Network (DBN) techniques was implemented for classifying intrusions.
The authors [15] presented an optimal DBN-based distributed IDS (ODBN-IDS) for secured CPS platforms. The Binary Flower Pollination Algorithm (BFPA) was utilized as the FS algorithm. The selected features were employed by an optimized DBN for detecting the occurrence of intrusions in cloud data and generating alarms when intrusions exist. In the DBN architecture, the Equilibrium Optimizer Algorithm (EOA) was employed for fine-tuning the hyperparameters. Xiao et al. [16] incorporated the software-defined network (SDN) model into the CPS framework to manage the CPS easily and provide a solution against network security issues. The authors also developed an identification method based on the ELM to secure the CPS.
Duhayyim et al. [17] designed an original Stochastic Fractal Search Algorithm with DL Driven IDS (SFSA-DLIDS) for cloud-based CPS platforms. This method mainly implements a min-max data normalization algorithm to convert the input dataset into a well-suited format. To decrease the dimensionality, the SFSA method was implemented for choosing a feature subset. Moreover, a chicken swarm optimizer (CSO) with a deep stacked autoencoder (DSAE) technique is exploited for discovering and classifying intrusions. Dutta et al. [18] developed a robust anomaly detection mechanism based on semi-supervised ML techniques, enabling near real-time localization. A deep neural network (DNN) framework was employed for identifying anomalies based on reconstruction errors.
In this work, a new EODRNN-ID method has been developed for cyberattack recognition in the CPS platform. The foremost goal of the EODRNN-ID model is to classify and identify intrusive actions in the CPS platform. The proposed EODRNN-ID method involves four sets of operations, namely data normalization, EO-based feature subset selection, DRNN-based classification, and COA-based parameter tuning. Figure 1 portrays the workflow of the EODRNN-ID procedure.
Normalization is done by mapping the variably scaled data into a reliable range, thereby removing dimensional variation among the logging data while preserving relationships within the dataset [19]. This method maps the data between 0 and 1:
x∗=(x−xmin)/(xmax−xmin), | (1) |
where xmin denotes the minimum value, xmax signifies the maximum value of a certain feature in the dataset, x∗ refers to the normalized data, and x denotes the original data.
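As a concrete illustration, Eq (1) is typically applied feature-by-feature; the snippet below is a minimal plain-Python sketch with hypothetical duration values:

```python
def min_max_normalize(column):
    """Scale a list of feature values into [0, 1] via Eq (1)."""
    x_min, x_max = min(column), max(column)
    if x_max == x_min:  # constant feature: map to 0 to avoid division by zero
        return [0.0 for _ in column]
    return [(x - x_min) / (x_max - x_min) for x in column]

# Hypothetical connection-duration values (seconds) from a traffic record
durations = [0.0, 2.0, 5.0, 10.0]
print(min_max_normalize(durations))  # → [0.0, 0.2, 0.5, 1.0]
```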
For electing the feature subsets, the EO model is used. The EO method is a recent optimizer inspired by the dynamic and equilibrium states that govern control-volume mass balance [20]. Each particle, with its respective concentration, plays the role of a search agent. After generating a random population, the initial concentration of the jth particle is formulated as:
CjInitial=Dmin+(Dmax−Dmin)×randj,  j=1,2,…,s. | (2) |
In Eq (2), randj is an arbitrarily produced value that lies in [0, 1], s refers to the number of particles, and Dmax and Dmin indicate the maximal and minimal values of the dimension.
The EO creates an equilibrium pool. The equilibrium candidates are first defined (without knowledge of the true equilibrium state) to provide a search pattern for the agents. The pool is built from the four best candidates found so far (i.e., those with the best fitness values), together with a fifth particle whose concentration is the average of the four:
Ceq,pool=[Ceq,1,Ceq,2,Ceq,3,Ceq,4,Ceq,mean]. | (3) |
An exponential term (F) is used for the concentration updating:
F=e−β(t−t0), | (4) |
t=(1−Iter/Itermax)^(μ×Iter/Itermax). | (5) |
In Eq (5), β indicates a random number in [0, 1], and μ is a constant that controls the exploitation ability. Likewise, α is a constant that controls the exploration ability, and t0 is evaluated using Eq (6) to ensure convergence:
t0=(1/β)ln(−α⋅sign(rand−0.5)[1−e−βt])+t. | (6) |
In this equation, sign(rand−0.5) controls the direction of exploitation and exploration. Thus, Eq (4) is reformulated as:
F=α⋅sign(rand−0.5)(e−βt−1). | (7) |
A parameter used to enhance the exploitation is named the generation rate (Gr) and is given as follows:
Gr=Gr0e−β(t−t0), | (8) |
Gr0=GrP(Ceq−βC), | (9) |
PG={0.5⋅rand1, if rand2>RP; 0, otherwise. | (10) |
Let GrP and PG be the generation rate parameter and the generation probability, respectively.
Eq (11) gives the updating rule (W is defined as unity).
C=Ceq+(C−Ceq)⋅F+(Gr/(βW))⋅(1−F). | (11) |
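Putting Eqs (4)-(11) together, a single EO concentration update can be sketched in plain Python as below. This is a minimal sketch under common EO settings rather than the paper's exact implementation: the values α = 2, μ = 1, RP = 0.5, and W = 1 are assumptions, and the text's GrP and PG are treated as one quantity, following the standard EO formulation.

```python
import math
import random

def eo_update(c, c_eq, it, it_max, alpha=2.0, mu=1.0, rp=0.5, w=1.0):
    """One EO concentration update for a single dimension, Eqs (4)-(11)."""
    beta = random.random() or 1e-12              # random number in [0, 1); guard against 0
    t = (1 - it / it_max) ** (mu * it / it_max)  # Eq (5)
    # Eq (7): exponential term with a random exploration direction
    f = alpha * math.copysign(1.0, random.random() - 0.5) * (math.exp(-beta * t) - 1)
    rand1, rand2 = random.random(), random.random()
    pg = 0.5 * rand1 if rand2 > rp else 0.0      # Eq (10): generation probability
    gr0 = pg * (c_eq - beta * c)                 # Eq (9)
    gr = gr0 * f                                 # Eq (8); the exponential term equals F by Eq (7)
    return c_eq + (c - c_eq) * f + gr / (beta * w) * (1 - f)  # Eq (11)

random.seed(0)
print(eo_update(c=0.8, c_eq=0.3, it=10, it_max=100))
```

In a full optimizer, `c_eq` would be drawn uniformly at random from the equilibrium pool of Eq (3) at each step.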
In the EO, the fitness function (FF) is used to balance the classification performance attained (to be maximized) against the number of features selected in the solution (to be minimized). The fitness function used to evaluate solutions is given in Eq (12).
Fitness=α⋅γR(D)+β⋅|R|/|C|. | (12) |
Here, α and β are parameters reflecting the importance of classification quality and subset length, with α ∈ [0, 1] and β=1−α. |R| implies the cardinality of the selected subset, γR(D) implies the classification error rate, and |C| shows the overall feature count in the database.
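Eq (12) is straightforward to evaluate once a candidate subset has been scored; the helper below is a minimal sketch in which α = 0.99 is an assumed weighting, not a value fixed by the text:

```python
def fs_fitness(error_rate, n_selected, n_total, alpha=0.99):
    """Eq (12): weighted sum of classification error and subset-size ratio."""
    beta = 1 - alpha                       # β = 1 − α
    return alpha * error_rate + beta * (n_selected / n_total)

# Hypothetical candidate: 3% error rate using 10 of 40 features
print(round(fs_fitness(0.03, 10, 40), 4))  # → 0.0322
```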
The DRNN model is used for the intrusion detection process. The RNN is a special type of densely connected NN that differs from the typical FFNN through the introduction of "time" [21]. Specifically, the output of the hidden layer in the RNN is fed back into the input, so that the input is a composite of the present and the recent past. The RNN exploits this peculiar structure to determine the relationship between events separated by a temporal interval. This represents a kind of long-term dependency, since a specific event is often a function of a past event. Figure 2 illustrates the framework of the RNN.
The RNN has a problem that degrades its performance: during learning, the gradient tends to vanish, as in other NN models. In this network, the gradient expresses the change in each weight with respect to the change in error. Moreover, the gradient computation passes through several stages of multiplication: when the multiplied quantity is less than one, the gradient becomes smaller (vanishing); when it is slightly greater than one, the gradient becomes larger (exploding). Without accurate knowledge of the gradient, the weights cannot be adjusted and the network cannot be trained. A variant of the RNN that exploits the LSTM unit is the preferred solution to the vanishing gradient problem. The LSTM unit helps to retain the error information that is backpropagated through time and layers, and it can learn long-term dependencies despite the gradient issue. The LSTM introduces a new structure named a memory cell that comprises four major components: a forget gate, an input gate, an output gate, and a neuron with a self-recurrent link. The LSTM unit thus has three gating mechanisms as new elements relative to the typical RNN.
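For reference, the gating structure of the memory cell described above is commonly written as follows (standard LSTM formulation, supplied here for clarity rather than taken from the text):

```latex
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{(forget gate)}\\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{(input gate)}\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{(output gate)}\\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) && \text{(candidate state)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, \qquad
h_t = o_t \odot \tanh(c_t)
\end{aligned}
```

The additive update of the cell state $c_t$ is what lets the error signal flow backward through time without repeated shrinking multiplications.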
The proposed architecture comprises two recurrent layers with LSTM cells, followed by a dense layer with a softmax activation function for the final classification. The entire network is trained by minimizing the categorical cross-entropy loss:
L(y,ŷ)=−∑(i=1 to N) yi⋅log(ŷi). | (13) |
In Eq (13), y and ŷ denote the target and predicted class distributions, respectively.
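Eq (13) can be sketched in plain Python for a single one-hot target; the class labels and probabilities below are hypothetical, and a small ε guards against log 0:

```python
import math

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """Eq (13): −Σ yi·log(ŷi) over the N classes."""
    return -sum(t * math.log(p + eps) for t, p in zip(y_true, y_pred))

# One-hot target "anomaly" vs. a softmax output assigning it probability 0.9
loss = categorical_cross_entropy([0.0, 1.0], [0.1, 0.9])
print(round(loss, 4))  # → 0.1054
```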
The Adam algorithm is the chosen optimizer: a gradient-based optimization technique that uses first- and second-order moments to attain fast and smooth convergence.
The COA is used for the optimal parameter tuning of the DRNN. Unlike other social-predator-inspired algorithms, the COA is based on the sexual motivation and intelligence of chimps during group hunting [22]. The four distinct stages of hunting in the COA are pushing, chasing, blocking, and attacking. Initially, chimpanzees are generated randomly to start the COA. The mathematical model of the COA's hunting is given below:
pchimp(t+1)=pprey(t)−κ⋅|J⋅pprey(t)−ζ⋅pchimp(t)|, | (14) |
κ=2⋅β⋅r1−β, | (15) |
J=2⋅r2, | (16) |
ζ=value generated according to chaotic maps, | (17) |
where t indicates the iteration number; κ, J, and ζ represent coefficient vectors; pprey denotes the position of the prey (the best solution attained so far); and pchimp denotes the position of the chimp. ζ refers to a chaotic mapping vector. Furthermore, β is a constant value nonlinearly decreased from 2.5 to 0 over the iterations, and r1 and r2 are randomly generated values within [0, 1]. Note that the reference gives a comprehensive analysis of these mappings and coefficients.
The most effective and primary strategy for mathematically replicating chimpanzee behavior is to take the prey as the assumed position of the target. The COA keeps the four topmost chimpanzees. Consequently, based on the positions of these best chimpanzees, the other individuals are compelled to relocate as follows:
pt+1=14×(p1+p2+p3+p4), | (18) |
where
p1=pA−a1⋅|c1⋅pA−m1⋅p|, |
p2=pB−a2⋅|c2⋅pB−m2⋅p|, |
p3=pC−a3⋅|c3⋅pC−m3⋅p|, |
p4=pD−a4⋅|c4⋅pD−m4⋅p|. | (19) |
Moreover, a chaotic value mimics the social motivation activity in the classical COA, as follows:
p(t+1)={ζ, if ηm≥1/2; Eq (14), if ηm<1/2. | (20) |
where ηm refers to a stochastic value within [0, 1]; however, this mechanism may result in moderate or premature convergence.
The COA method derives an FF in order to achieve high classification efficacy. It defines a positive integer to signify the quality of the candidate solution. The minimization of the classification error rate is taken as the FF.
fitness(xi)=ClassifierErrorRate(xi)=(No. of misclassified samples/Total no. of samples)×100. | (21) |
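The coefficient computation and position update of Eqs (14)-(19) can be sketched as follows. This is a plain-Python sketch under stated assumptions: a fixed constant stands in for the chaotic value ζ, β follows a simple linear schedule from 2.5 toward 0, and the ai, ci, mi of Eq (19) are taken to correspond to the κ, J, ζ of Eqs (15)-(17):

```python
import random

def chimp_step(p_chimp, p_best, beta, zeta):
    """Eq (14): move one chimp relative to one of the best positions."""
    r1, r2 = random.random(), random.random()
    kappa = 2 * beta * r1 - beta                 # Eq (15)
    j = 2 * r2                                   # Eq (16)
    return p_best - kappa * abs(j * p_best - zeta * p_chimp)

def coa_update(p, best_four, beta, zeta):
    """Eqs (18)-(19): average of the moves driven by the four best chimps."""
    return sum(chimp_step(p, pb, beta, zeta) for pb in best_four) / 4.0

def error_rate_fitness(n_wrong, n_total):
    """Eq (21): classification error rate as a percentage."""
    return n_wrong / n_total * 100

random.seed(7)
beta = 2.5 * (1 - 10 / 100)   # β decreased from 2.5 toward 0 (linear schedule assumed)
zeta = 0.7                    # stand-in chaotic value (fixed constant for this sketch)
new_p = coa_update(0.4, [0.90, 0.85, 0.88, 0.92], beta, zeta)
print(round(error_rate_fitness(3, 100), 1))  # → 3.0
```

In the tuning loop, each candidate position encodes a DRNN hyperparameter setting, and the chimp with the lowest error-rate fitness is kept as the attacker.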
The intrusion detection results of the EODRNN-ID technique are verified on two benchmark databases, the NSLKDD2015 and CICIDS2017 datasets, as defined in Table 1.
Classes | No. of Instances | |
NSLKDD2015 | CICIDS2017 | |
Normal | 67343 | 50000 |
Anomaly | 58630 | 50000 |
Total No. of Instances | 125973 | 100000 |
The confusion matrices achieved by the EODRNN-ID approach on the NSLKDD2015 dataset are shown in Figure 3. The results show effective recognition of both the normal and anomaly samples under all classes.
The recognition outcomes of the EODRNN-ID methodology on the NSLKDD2015 dataset are given in Table 2 and Figure 4. The outcomes imply effectual recognition of both the normal and anomaly samples by the EODRNN-ID technique. With 80% of the TRAP, the EODRNN-ID method obtains average accuy, precn, recal, Fscore, and AUCscore values of 97.44%, 97.45%, 97.44%, 97.45%, and 97.44%, respectively. Moreover, with 20% of the TESP, the EODRNN-ID method obtains average accuy, precn, recal, Fscore, and AUCscore values of 97.39%, 97.41%, 97.39%, 97.40%, and 97.39%, respectively.
NSLKDD2015 Database | |||||
Classes | Accuy | Precn | Recal | F1score | AUCscore |
80% of TRAP | |||||
Normal | 97.66 | 97.58 | 97.66 | 97.62 | 97.44 |
Anomaly | 97.23 | 97.32 | 97.23 | 97.27 | 97.44 |
Average | 97.44 | 97.45 | 97.44 | 97.45 | 97.44 |
20% of TESP | |||||
Normal | 97.73 | 97.47 | 97.73 | 97.60 | 97.39 |
Anomaly | 97.04 | 97.34 | 97.04 | 97.19 | 97.39 |
Average | 97.39 | 97.41 | 97.39 | 97.40 | 97.39 |
70% of TRAP | |||||
Normal | 99.06 | 98.09 | 99.06 | 98.57 | 98.43 |
Anomaly | 97.80 | 98.91 | 97.80 | 98.35 | 98.43 |
Average | 98.43 | 98.50 | 98.43 | 98.46 | 98.43 |
30% of TESP | |||||
Normal | 98.99 | 98.19 | 98.99 | 98.59 | 98.44 |
Anomaly | 97.89 | 98.82 | 97.89 | 98.35 | 98.44 |
Average | 98.44 | 98.51 | 98.44 | 98.47 | 98.44 |
To evaluate the performance of the EODRNN-ID method on the NSLKDD2015 dataset, the TRA and TES accuy curves are presented in Figure 5. These curves display the performance of the EODRNN-ID method over the epochs and reveal significant facts about the learning and generalization capacities of the EODRNN-ID model. It is apparent that the TRA and TES accuy curves improve with an increased epoch count. Notably, the EODRNN-ID technique attains superior testing results, indicating a high ability to detect the patterns in the TRA and TES data.
Figure 6 demonstrates the complete TRA and TES loss performance of the EODRNN-ID model on the NSLKDD2015 dataset over the epochs. The TRA loss shows that the model loss is reduced over the epochs. Primarily, the loss value is minimized as the model adapts its weights to reduce the prediction error on the TRA and TES datasets. The loss curves illustrate the degree to which the model fits the TRA dataset. The TRA and TES losses gradually decrease, showing that the EODRNN-ID technique efficiently learns the patterns revealed in the TRA and TES data. Also, the EODRNN-ID method adjusts its parameters to reduce the divergence between the predicted and original TRA labels.
The detection performance of the EODRNN-ID method on the CICIDS2017 dataset is given in Table 3 and Figure 7. The outcomes indicate the effective detection of the normal and anomaly samples by the EODRNN-ID methodology. With 80% of the TRAP, the EODRNN-ID method accomplishes average accuy, precn, recal, Fscore, and AUCscore values of 99.20%, 99.20%, 99.20%, 99.20%, and 99.20%, respectively. Besides, with 20% of the TESP, the EODRNN-ID model obtains average accuy, precn, recal, Fscore, and AUCscore values of 99.28%, 99.27%, 99.28%, 99.27%, and 99.28%, respectively.
CICIDS 2017 Dataset | |||||
Classes | Accuy | Precn | Recal | F1score | AUCscore |
80% of TRAP | |||||
Normal | 99.05 | 99.35 | 99.05 | 99.20 | 99.20 |
Anomaly | 99.36 | 99.05 | 99.36 | 99.20 | 99.20 |
Average | 99.20 | 99.20 | 99.20 | 99.20 | 99.20 |
20% of TESP | |||||
Normal | 99.15 | 99.40 | 99.15 | 99.28 | 99.28 |
Anomaly | 99.40 | 99.15 | 99.40 | 99.27 | 99.28 |
Average | 99.28 | 99.27 | 99.28 | 99.27 | 99.28 |
70% of TRAP | |||||
Normal | 98.20 | 98.25 | 98.20 | 98.23 | 98.23 |
Anomaly | 98.26 | 98.20 | 98.26 | 98.23 | 98.23 |
Average | 98.23 | 98.23 | 98.23 | 98.23 | 98.23 |
30% of TESP | |||||
Normal | 98.28 | 98.24 | 98.28 | 98.26 | 98.26 |
Anomaly | 98.23 | 98.28 | 98.23 | 98.25 | 98.26 |
Average | 98.26 | 98.26 | 98.26 | 98.26 | 98.26 |
To estimate the performance of the EODRNN-ID methodology on the CICIDS2017 dataset, the TRA and TES accuy curves are presented in Figure 8. These curves illustrate the performance of the EODRNN-ID model over numerous epochs. The figure delivers significant details concerning the learning and generalization capabilities of the EODRNN-ID technique. It is perceived that the TRA and TES accuy curves improve with an increased epoch count. The EODRNN-ID methodology attains improved testing accuracy, demonstrating its ability to detect the patterns in the TRA and TES datasets.
Figure 9 shows the complete TRA and TES loss performance of the EODRNN-ID model on the CICIDS2017 dataset over the epochs. The TRA loss demonstrates that the model loss is reduced over the epochs. Primarily, the loss value is minimized as the model adapts its weights to diminish the predictive error on the TRA and TES datasets. The loss curves illustrate the degree to which the model fits the TRA dataset. It is perceived that the TRA and TES losses steadily decrease, depicting that the EODRNN-ID technique learns the patterns present in the TRA and TES datasets. Also, the EODRNN-ID technique adjusts its parameters to minimize the dissimilarity between the predicted and original TRA labels.
The PR curve of the EODRNN-ID method on the CICIDS2017 dataset, established by plotting precision against recall, is shown in Figure 10. The outcomes confirm that the EODRNN-ID technique achieves increased precision-recall values for all classes. The figure shows that the method learns to identify the different classes and achieves improved detection of positive samples with few false positives.
The ROC curves delivered by the EODRNN-ID methodology on the CICIDS2017 dataset are demonstrated in Figure 11, showing its ability to distinguish the classes. The figure offers valuable insights into the tradeoff between the TPR and FPR over different detection thresholds and varying numbers of epochs, and projects the accurate predictive performance of the EODRNN-ID model in detecting the different classes.
The intrusion detection results of the EODRNN-ID system are compared with those of existing methods in Table 4 and Figure 12 [11]. The outcomes establish that the EODRNN-ID technique reaches improved performance over the other models. It is observed that the LIB-SVM, WISARD, and Forest-PA models produce worse results, whereas the XAIID-SCPS, FURIA, AE-RF, and GSAE models accomplish manageable performance. However, the EODRNN-ID technique results in better performance with maximum accuy, precn, recal, and F1score of 99.28%, 99.27%, 99.28%, and 99.27%, respectively. Thus, the EODRNN-ID technique can be applied to achieve security in the CPS platform.
Methods | Accuy | Precn | Recal | F1score |
EODRNN-ID | 99.28 | 99.27 | 99.28 | 99.27 |
XAIID-SCPS | 98.87 | 98.95 | 98.87 | 98.91 |
FURIA Model | 98.14 | 97.57 | 96.93 | 98.26 |
AE-RF Model | 97.62 | 97.35 | 97.79 | 97.30 |
Forest-PA | 96.72 | 96.97 | 97.32 | 98.13 |
WISARD | 96.64 | 97.58 | 97.29 | 98.65 |
GSAE Model | 97.63 | 95.97 | 98.39 | 98.19 |
LIB-SVM | 96.57 | 96.96 | 96.83 | 97.92 |
In this manuscript, we have established the EODRNN-ID methodology for cyberattack recognition in the CPS platform. The major intention of the EODRNN-ID algorithm is to classify and detect intrusive actions in CPS platforms. The proposed EODRNN-ID method includes four sets of operations, namely data normalization, EO-based feature subset selection, DRNN-based classification, and COA-based parameter tuning. The simulation study of the EODRNN-ID method was verified on benchmark data. Extensive outcomes illustrate the significant performance of the EODRNN-ID method over existing techniques.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
All authors declare no conflicts of interest in this paper.
[1] A. M. Reynolds, C. J. Rhodes, The Lévy flight paradigm: Random search patterns and mechanisms, Ecology, 90 (2009), 877–887. doi: 10.1890/08-0153.1
[2] T. Kim, S. Kim, Singularity spectra of fractional Brownian motions as a multi-fractal, Chaos Soliton. Fract., 19 (2004), 613–619. doi: 10.1016/S0960-0779(03)00187-5
[3] M. Mignotte, A fractal projection and Markovian segmentation-based approach for multimodal change detection, IEEE T. Geosci. Remote, 58 (2020), 8046–8058. doi: 10.1109/TGRS.2020.2986239
[4] M. O. Cáceres, Non-Markovian processes with long-range correlations: Fractal dimension analysis, Braz. J. Phys., 29 (1999), 125–135. doi: 10.1590/S0103-97331999000100011
[5] A. Atangana, J. Nieto, Numerical solution for the model of RLC circuit via the fractional derivative without singular kernel, Adv. Mech. Eng., 7 (2015), 1–7. doi: 10.1177/1687814015613758
[6] D. Brockmann, L. Hufnagel, Front propagation in reaction-superdiffusion dynamics: Taming Lévy flights with fluctuations, Phys. Rev. Lett., 98 (2007), 178301. doi: 10.1103/PhysRevLett.98.178301
[7] E. F. D. Goufo, S. Kumar, S. Mugisha, Similarities in a fifth-order evolution equation with and with no singular kernel, Chaos Soliton. Fract., 130 (2020), 109467. doi: 10.1016/j.chaos.2019.109467
[8] W. Wang, M. A. Khan, Analysis and numerical simulation of fractional model of bank data with fractal-fractional Atangana-Baleanu derivative, J. Comput. Appl. Math., 369 (2020), 112646. doi: 10.1016/j.cam.2019.112646
[9] S. Das, Convergence of Riemann-Liouville and Caputo derivative definitions for practical solution of fractional order differential equation, Int. J. Appl. Math. Stat., 23 (2011), 64–74. doi: 10.1416/i.ijams.2011.03.017
[10] A. Atangana, T. Mekkaoui, Trinition the complex number with two imaginary parts: Fractal, chaos and fractional calculus, Chaos Soliton. Fract., 128 (2019), 366–381. doi: 10.1016/j.chaos.2019.08.018
[11] E. F. D. Goufo, Fractal and fractional dynamics for a 3D autonomous and two-wing smooth chaotic system, Alexandria Eng. J., (2020). doi: 10.1016/j.aej.2020.03.011
[12] E. F. D. Goufo, Application of the Caputo-Fabrizio fractional derivative without singular kernel to Korteweg-de Vries-Burgers equation, Math. Model. Anal., 21 (2016), 188–198. doi: 10.3846/13926292.2016.1145607
[13] A. Atangana, Fractal-fractional differentiation and integration: Connecting fractal calculus and fractional calculus to predict complex system, Chaos Soliton. Fract., 102 (2017), 396–406. doi: 10.1016/j.chaos.2017.04.027
[14] S. İ. Araz, Numerical analysis of a new Volterra integro-differential equation involving fractal-fractional operators, Chaos Soliton. Fract., 130 (2020), 109396. doi: 10.1016/j.chaos.2019.109396
[15] E. F. Doungmo Goufo, The proto-Lorenz system in its chaotic fractional and fractal structure, Int. J. Bifurcat. Chaos, 30 (2020), 2050180. doi: 10.1142/S0218127420501801
[16] M. V. Berry, S. Klein, Integer, fractional and fractal Talbot effects, J. Mod. Optic., 43 (1996), 2139–2164. doi: 10.1080/09500349608232876
[17] A. A. Kilbas, H. M. Srivastava, J. J. Trujillo, Theory and Applications of Fractional Differential Equations, Elsevier Science Limited, 2006. ISBN: 9780444518323
[18] S. Pooseh, H. S. Rodrigues, D. F. Torres, Fractional derivatives in dengue epidemics, AIP Conference Proceedings, 1389 (2011), 739–742. https://arXiv.org/pdf/1108.1683.pdf
[19] W. Macek, R. Branco, M. Korpyś, T. Łagoda, Fractal dimension for bending–torsion fatigue fracture characterisation, Measurement, 184 (2021), 109910. doi: 10.1016/j.measurement.2021.109910
[20] L. R. Carney, J. J. Mecholsky Jr, Relationship between fracture toughness and fracture surface fractal dimension in AISI 4340 steel, (2013). doi: 10.4236/msa.2013.44032
[21] A. Atangana, S. I. Araz, Atangana-Seda numerical scheme for labyrinth attractor with new differ, Geophys. J. Int., 13 (2020), 529–539. doi: 10.1142/S0218348X20400447
[22] K. Diethelm, N. J. Ford, A. D. Freed, A predictor-corrector approach for the numerical solution of fractional differential equations, Nonlinear Dynam., 29 (2002), 3–22. doi: 10.1023/A:1016592219341
Dataset class distribution

| Classes | No. of instances (NSLKDD2015) | No. of instances (CICIDS2017) |
|---|---|---|
| Normal | 67343 | 50000 |
| Anomaly | 58630 | 50000 |
| Total | 125973 | 100000 |
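The result tables are reported on 80/20 and 70/30 training/testing partitions (TRAP/TESP) of these datasets. A minimal sketch of how such partitions can be produced from the instance counts above (illustrative only, not the authors' preprocessing code; the `split` helper and the fixed seed are assumptions):

```python
import random

def split(records, train_fraction, seed=0):
    """Shuffle and partition records into training (TRAP) and testing (TESP) subsets."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# NSLKDD2015 instance counts from the table above
records = ["Normal"] * 67343 + ["Anomaly"] * 58630
train_80, test_20 = split(records, 0.80)  # 80% TRAP / 20% TESP
train_70, test_30 = split(records, 0.70)  # 70% TRAP / 30% TESP
```

In practice a stratified split (preserving the Normal/Anomaly ratio in both subsets) would be preferred; the plain shuffle above only approximates that.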
Results on the NSLKDD2015 dataset

| Classes | Accuracy | Precision | Recall | F1-score | AUC |
|---|---|---|---|---|---|
| 80% of data (training phase, TRAP) | | | | | |
| Normal | 97.66 | 97.58 | 97.66 | 97.62 | 97.44 |
| Anomaly | 97.23 | 97.32 | 97.23 | 97.27 | 97.44 |
| Average | 97.44 | 97.45 | 97.44 | 97.45 | 97.44 |
| 20% of data (testing phase, TESP) | | | | | |
| Normal | 97.73 | 97.47 | 97.73 | 97.60 | 97.39 |
| Anomaly | 97.04 | 97.34 | 97.04 | 97.19 | 97.39 |
| Average | 97.39 | 97.41 | 97.39 | 97.40 | 97.39 |
| 70% of data (training phase, TRAP) | | | | | |
| Normal | 99.06 | 98.09 | 99.06 | 98.57 | 98.43 |
| Anomaly | 97.80 | 98.91 | 97.80 | 98.35 | 98.43 |
| Average | 98.43 | 98.50 | 98.43 | 98.46 | 98.43 |
| 30% of data (testing phase, TESP) | | | | | |
| Normal | 98.99 | 98.19 | 98.99 | 98.59 | 98.44 |
| Anomaly | 97.89 | 98.82 | 97.89 | 98.35 | 98.44 |
| Average | 98.44 | 98.51 | 98.44 | 98.47 | 98.44 |
Results on the CICIDS2017 dataset

| Classes | Accuracy | Precision | Recall | F1-score | AUC |
|---|---|---|---|---|---|
| 80% of data (training phase, TRAP) | | | | | |
| Normal | 99.05 | 99.35 | 99.05 | 99.20 | 99.20 |
| Anomaly | 99.36 | 99.05 | 99.36 | 99.20 | 99.20 |
| Average | 99.20 | 99.20 | 99.20 | 99.20 | 99.20 |
| 20% of data (testing phase, TESP) | | | | | |
| Normal | 99.15 | 99.40 | 99.15 | 99.28 | 99.28 |
| Anomaly | 99.40 | 99.15 | 99.40 | 99.27 | 99.28 |
| Average | 99.28 | 99.27 | 99.28 | 99.27 | 99.28 |
| 70% of data (training phase, TRAP) | | | | | |
| Normal | 98.20 | 98.25 | 98.20 | 98.23 | 98.23 |
| Anomaly | 98.26 | 98.20 | 98.26 | 98.23 | 98.23 |
| Average | 98.23 | 98.23 | 98.23 | 98.23 | 98.23 |
| 30% of data (testing phase, TESP) | | | | | |
| Normal | 98.28 | 98.24 | 98.28 | 98.26 | 98.26 |
| Anomaly | 98.23 | 98.28 | 98.23 | 98.25 | 98.26 |
| Average | 98.26 | 98.26 | 98.26 | 98.26 | 98.26 |
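The per-class figures and the "Average" rows in the tables above follow the standard definitions of precision, recall, and F1-score with macro averaging over the two classes. A minimal sketch of that computation (illustrative only; the `per_class_metrics` helper and the toy labels are assumptions, not the evaluation code behind the tables):

```python
def per_class_metrics(y_true, y_pred, classes=("Normal", "Anomaly")):
    """Return {class: (precision, recall, f1)} computed one-vs-rest per class."""
    scores = {}
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores[c] = (precision, recall, f1)
    return scores

# Toy labels for illustration
y_true = ["Normal", "Normal", "Anomaly", "Anomaly", "Normal"]
y_pred = ["Normal", "Anomaly", "Anomaly", "Anomaly", "Normal"]
m = per_class_metrics(y_true, y_pred)
macro_f1 = sum(v[2] for v in m.values()) / len(m)  # the "Average" row
```

Note that in a binary setting the AUC is a single ranking-based quantity, which is why the tables repeat the same AUC value for both classes.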
Comparison with existing methods

| Methods | Accuracy | Precision | Recall | F1-score |
|---|---|---|---|---|
| EODRNN-ID | 99.28 | 99.27 | 99.28 | 99.27 |
| XAIID-SCPS | 98.87 | 98.95 | 98.87 | 98.91 |
| FURIA Model | 98.14 | 97.57 | 96.93 | 98.26 |
| AE-RF Model | 97.62 | 97.35 | 97.79 | 97.30 |
| Forest-PA | 96.72 | 96.97 | 97.32 | 98.13 |
| WISARD | 96.64 | 97.58 | 97.29 | 98.65 |
| GSAE Model | 97.63 | 95.97 | 98.39 | 98.19 |
| LIB-SVM | 96.57 | 96.96 | 96.83 | 97.92 |