
The refuge effect is critical in ecosystems for stabilizing predator-prey interactions. The purpose of this research was to investigate the complexities of a discrete-time predator-prey system with a refuge effect. The analysis investigated the presence and stability of fixed points, as well as period-doubling and Neimark-Sacker (NS) bifurcations. The bifurcating and fluctuating behavior of the system was controlled via feedback and hybrid control methods. In addition, numerical simulations were performed as evidence to back up our theoretical findings. According to our findings, maintaining an optimal level of refuge availability was critical for predator and prey population cohabitation and stability.
Citation: Parvaiz Ahmad Naik, Muhammad Amer, Rizwan Ahmed, Sania Qureshi, Zhengxin Huang. Stability and bifurcation analysis of a discrete predator-prey system of Ricker type with refuge effect[J]. Mathematical Biosciences and Engineering, 2024, 21(3): 4554-4586. doi: 10.3934/mbe.2024201
[1] Keying Du, Liuyang Fang, Jie Chen, Dongdong Chen, Hua Lai. CTFusion: CNN-transformer-based self-supervised learning for infrared and visible image fusion. Mathematical Biosciences and Engineering, 2024, 21(7): 6710-6730. doi: 10.3934/mbe.2024294
[2] Yufeng Li, Chengcheng Liu, Weiping Zhao, Yufeng Huang. Multi-spectral remote sensing images feature coverage classification based on improved convolutional neural network. Mathematical Biosciences and Engineering, 2020, 17(5): 4443-4456. doi: 10.3934/mbe.2020245
[3] Basem Assiri, Mohammad Alamgir Hossain. Face emotion recognition based on infrared thermal imagery by applying machine learning and parallelism. Mathematical Biosciences and Engineering, 2023, 20(1): 913-929. doi: 10.3934/mbe.2023042
[4] Shuai Cao, Biao Song. Visual attentional-driven deep learning method for flower recognition. Mathematical Biosciences and Engineering, 2021, 18(3): 1981-1991. doi: 10.3934/mbe.2021103
[5] Eric Ke Wang, Nie Zhe, Yueping Li, Zuodong Liang, Xun Zhang, Juntao Yu, Yunming Ye. A sparse deep learning model for privacy attack on remote sensing images. Mathematical Biosciences and Engineering, 2019, 16(3): 1300-1312. doi: 10.3934/mbe.2019063
[6] Jun Gao, Qian Jiang, Bo Zhou, Daozheng Chen. Convolutional neural networks for computer-aided detection or diagnosis in medical image analysis: An overview. Mathematical Biosciences and Engineering, 2019, 16(6): 6536-6561. doi: 10.3934/mbe.2019326
[7] Akansha Singh, Krishna Kant Singh, Michal Greguš, Ivan Izonin. CNGOD-An improved convolution neural network with grasshopper optimization for detection of COVID-19. Mathematical Biosciences and Engineering, 2022, 19(12): 12518-12531. doi: 10.3934/mbe.2022584
[8] Danial Sharifrazi, Roohallah Alizadehsani, Javad Hassannataj Joloudari, Shahab S. Band, Sadiq Hussain, Zahra Alizadeh Sani, Fereshteh Hasanzadeh, Afshin Shoeibi, Abdollah Dehzangi, Mehdi Sookhak, Hamid Alinejad-Rokny. CNN-KCL: Automatic myocarditis diagnosis using convolutional neural network combined with k-means clustering. Mathematical Biosciences and Engineering, 2022, 19(3): 2381-2402. doi: 10.3934/mbe.2022110
[9] Bakhtyar Ahmed Mohammed, Muzhir Shaban Al-Ani. An efficient approach to diagnose brain tumors through deep CNN. Mathematical Biosciences and Engineering, 2021, 18(1): 851-867. doi: 10.3934/mbe.2021045
[10] Zijian Wang, Yaqin Zhu, Haibo Shi, Yanting Zhang, Cairong Yan. A 3D multiscale view convolutional neural network with attention for mental disease diagnosis on MRI images. Mathematical Biosciences and Engineering, 2021, 18(5): 6978-6994. doi: 10.3934/mbe.2021347
A fast diagnosis is an urgent need in situations like the COVID-19 pandemic [1,2]. The routine test for the virus follows the RT-PCR method, but this test is carried out sequentially, so there is a chance of a high false-negative rate (FNR), and such a test takes a long time to complete. Beyond that, shortages of RT-PCR test kits arise. There is therefore a serious need for alternative tactics that diagnose patients more quickly to manage these situations. An infrared image is self-sufficient for identifying such diseases by measuring temperature as a fast finding. CT scans and other pathological tests remain essential in evaluating a patient with a suspected pandemic infection [3,4]. Moreover, deep precognitive analysis is applied for disease detection during pandemic situations, and in such situations a biologically inspired convolutional fuzzy network can be more effective [5]. However, a patient's radiological findings may only be expected at first. In this paper, we collect information about a patient's eye retina with an infrared camera, measuring temperature using a smartphone whose camera is enriched with infrared features. Almost every mobile phone with a standard configuration has this feature; if not, an infrared-enabled application can be installed from the Google Play Store. The reading is then noted, and the next course of action is taken. Instead of using a regular infrared camera, we want to incorporate a mobile camera with its features, so this technique supports both applications (through mobile and infrared cameras).
There are numerous applications of infrared images for detecting humans and their body parts: visual surveillance, human-activity tracking [6], medical applications for eye-disease detection [7], driver safety, homeland security, etc. An important possibility is monitoring people through CCTV and surveillance-camera-based systems. Infrared images are also used to authenticate human facial parts [8,9]. An infrared camera can capture images in any lighting conditions, whereas an RGB camera frequently needs proper lighting to capture clear, high-resolution images. Infrared imaging schemes use infrared light sources to produce a better image without natural or artificial light. Traditional face recognition systems depend on self-reporting or manual measurement schemes [10], but the infrared technique depends on an automated system [11]; the traditional system is weak at checking continuously and quickly. Researchers have looked into contactless sensors and infrared cameras to solve these issues. Since temperature is directly connected to the physical parts of a human face, such as the eyes, forehead, cheeks, and lips, a few researchers have used these physical parts to identify temperature. Lips, mouth, and eyebrows are intimately related to different temperature measurement schemes, so a projected model is addressed. The authors of [10] proposed a method to identify human faces by visual features such as the blinking ratio of closed and open eyes. In further work, researchers tracked 3D face images using mobile phones. In a recent study, M. Kim, B. H. Kim, and S. Jo [12] developed a contactless real-time scheme for recognizing when a driver is operating a car in a sleeping or drowsy state.
The use of a digital camera is a normal phenomenon. However, owing to several good characteristics, the infrared camera and its imaging techniques are used in applications such as pandemic disease identification. Recent literature suggests that the computational time of an infrared imaging technique is less than that of an RGB imaging technique [13], and our work takes advantage of these differences. Using different imaging types, researchers have demonstrated their efficiency in quantitative measurement. For example, the researchers in [14] achieved 65% accuracy utilizing conventional images, whereas those in [15] achieved more than 91% accuracy using infrared images. This demonstrates that a greater recognition accuracy rate can be achieved by employing infrared imaging techniques.
The following are the key contributions of our manuscript.
● We proposed an infrared imaging technique to identify pandemic disease and make accurate, quicker measurements.
● A few parts of the face are considered for measuring temperature, so a segmentation technique is used to divide the face into various classes.
● A novel searching technique is incorporated to search the regions from left to right.
The paper is separated into sections as follows:
Section 2 illustrates related works. The proposed tracking scheme is described in detail in Section 3. The image registration process with its sub-sections (pre-processing, segmentation by the improved gradient method, feature extraction, the hybrid adaptive optimized classifier system, head pose estimation and correction, feature-weight extraction, region detection, and vector formation from the segmented regions) is demonstrated in Section 4. The findings of the experiment are presented in Section 5. The final section concludes and discusses the direction our work will take in the future.
Mobile and other electronic gadgets are used for face tracking [16] from a short distance in many applications nowadays. However, this may be unsuitable in many situations. Many scholars have used facial signals for security and safety to control surveillance-related issues. A video-based human identification system can offer a proficient, nonintrusive solution for day-to-day use. A video-based imaging approach routinely uses two imaging methods, i.e., infrared and color. The authors of [17] presented an improved-resolution infrared facial image database with extensive manual annotations and clarified its flexibility in various applications. They proposed a set of correlated algorithms for detection and approximation within a facial image and advised that a multi-process design based on networking middleware be used. It might be used for real-time face authentication and recognition using infrared pictures. By evaluating the frontal view of facial regions, the scholars in [18] showed how infrared video can be exploited with the help of the SIFT flow approach. Furthermore, eye tracking using video and its effectiveness has been presented along with the suggested efficiencies [19]. Finally, the evaluation is completed using the face-temperature histogram value.
Surveillance systems that use network devices such as cameras to monitor and track activities generate massive volumes of data [20]. Both the migration of data over bandwidth constraints and the accumulation of lag in network technologies are problems that need to be solved. One proposal is to construct a decentralized facial recognition algorithm applying PCA and LBP [21] for a distributed surveillance system using wireless networks and cloud computing. In the regionalized face-tracking approach, face recognition, feature extraction, and matching [22] are done in two steps. First, face detection and feature extraction are performed on a specified cloudlet adjacent to the security cameras, avoiding sending vast amounts of data to a distant processing center. Face matching, on the other hand, is done using the facial feature vector in any private cloudlet environment. According to this study, the suggested approach works effectively in the Wireless Network-Cloud architecture, demonstrating its usefulness in finding "lost" people.
In recent years, security has become an essential part of human life, and the most significant consideration at this point is cost. This technology is quite advantageous in minimizing the cost of external movement monitoring. A real-time recognition method has been provided that handles photographs quickly [23], with the main objective of recognizing people so that the house and office can be secured. A PIR sensor is utilized to detect movement in a defined area. The Raspberry Pi then captures the photographs, and the face in the captured image is detected and recognized [24]. Finally, the photographs and notifications are uploaded to a smartphone-based wireless network via the Telegram program. The proposed system calculates in real time, is rapid, and is low-cost. The experiments suggest that the proposed facial recognition system may be employed in real time.
Another work proposes a higher-efficiency, feature-based face recognition approach that uses a newly created process named Floor of Log (FoL). This method has the benefit of conserving space and energy while maintaining precision. The scholars [25] used the K-Nearest Neighbours (KNN) and Support Vector Machine (SVM) [26] approaches to discover the optimal factor of the FoL technique through cross-validation. The correctness and size after compression of the suggested approach were assessed. On the Extended YaleB, AR, LFW, and CelebA face datasets, FoL produced better results than techniques with equivalent classifiers [27] without compressed features, reaching 86% to 91% relative to the same data size. This study provides a robust and easy feature-compression technique for FER applications for recognition of various parts [28]. FoL is a supervised compression approach that may be modified to get better results and is compatible with edge computing schemes.
The wireless network is a concept that integrates technology into our day-to-day activities by applying deep neural network and convolutional neural network (CNN) learning techniques [29,30]. In their FER system, the authors of [31] introduced CNN and long short-term memory (LSTM) techniques. On the other hand, the researchers in [32] implemented a CNN using less data and an eight-fold approach, yielding improved results. One of the key categories in which this skill assists us is safety and privacy. Smartphones may be used as a safety alert scheme because they are the most extensively used smart gadgets. Artificial intelligence (A.I.)-enabled intelligent wireless network devices have grown in popularity in recent years. One study developed an innovative network security solution for the smart home using a neuro-fuzzy optimized classifier [33]. The security system is built around a Raspberry Pi and a NoIR (no-infrared) Pi camera unit that records and captures images [34,35]. A passive infrared motion sensor (PIR-MS) is also used to identify motion. The authors propose combining images and motion-sensor records from the NoIR Pi camera unit to identify a safety threat using the algorithm's facial recognition classification technique [36,37]. In the event of an emergency, the system can notify the user. The proposed system has 95.5% accuracy and 91% precision in detecting any security threat.
The Visual Internet of Things (VIoT) and wireless communication have attracted much interest in recent years because of their capacity to extract object position from scene-picture information, attach an optical tag to the item, and then return scene-object information to the wireless network. Face recognition is one of the ideal visual network methods since a person's face is an intrinsic label [38]. The researchers in [39] developed a pose-estimation technique to resolve the poor FER performance caused by long-range pixels. However, due to a lack of processing resources, existing state-of-the-art face recognition methods based on huge deep artificial neural networks (ANNs) [40] are challenging to implement on an embedded platform for the visual wireless network. To overcome this problem, a small deep-ANN-based facial recognition system for the VIoT has been provided. The technique employs deep neural networks with minimal complexity [41] to function in an embedded setting.
Moreover, it can withstand changes in lighting and position, exhibiting comparable accuracy and performance results for the LFW verification benchmark using the mobile face recognition dataset. Scholars [42] have demonstrated that a Facial Action Coding System (FACS) is used to detect the characterization of a human being and that the same system could be applied to categorizing varied consumer goods by employing the affective reactions of selected consumers. Work [43] proved and implemented a real-time method on an Android-built platform for panic-face detection [44] and fatigue detection [45] by incorporating mutation, genetic algorithms [46,47], and more in expression recognition. Moreover, scholars [48] have shown that a histogram approach becomes very appropriate in these predictions for quicker identification of depth measurement and various expression recognition. Alzheimer's disease is a progressive degenerative neurological illness. It is currently incurable, and those who suffer from it are denied the freedom to leave their houses compared with the general population. One article aims to develop an IoT prototype that can identify people victimized by Alzheimer's, thereby increasing their quality of life and easing caregivers' jobs. The patient wears a small dorsal belt that contains a NodeMCU ESP8266 board, a GPS module, and a small portable WiFi modem/router. The patient's location is tracked via a web application and an Android/iOS mobile application. This research also uses a Kalman filter to track the patient's movement and estimate his position, especially when the patient wanders outside.
Pneumonia causes high morbidity and mortality rates in infants. This sickness affects the lungs' tiny air sacs, necessitating rapid diagnosis and treatment. One of the most popular diagnostics for pneumonia is a chest X-ray. One paper explains how to identify pneumonia in chest X-ray pictures using a real-time wireless network system. Three medical experts reviewed the data, which included 6000 images of children's chest X-rays. That study adopted twelve alternative ImageNet-trained convolutional neural network (CNN) [49,50] architectures as feature extractors. CNNs and deep neural networks are very promising when used in FER for disguised and distorted expression recognition [51]. Many prototype designs have been proposed by scholars [52] based on deep CNNs for identifying people's faces, irises, and finger veins in daily life. The CNNs were then integrated with other learning approaches such as KNN, Naive Bayes (N.B.), Random Forest (R.F.), Multilayer Perceptron (MLP) [53], and SVM. The best model for diagnosing pneumonia in these chest radiographs was the VGG19 architecture with the SVM classifier and RBF kernel; the reported scores for this combination were 96.47%, 96.46%, and 96.46%. Compared with other articles in the literature, the proposed approach yielded better results for the measures used. These findings imply that using a real-time wireless network system to detect pneumonia in children is beneficial and could be utilized as a diagnostic tool in the future. Doctors will obtain faster and more precise findings using this technology, allowing them to provide the best treatment possible.
Moreover, many works use machine-learning-based recognition approaches. Researchers employ machine learning models such as SVM, CNN, artificial neural networks (ANN), and genetic algorithms for facial identification. A CNN is a deep learning approach [54,55] with a high performance level that can extract features from training data. It extracts features through several convolutional layers and is often followed by a series of fully connected layers [30]. On the CK+ database, the investigators in [32] raised CNN accuracy to 96.76%. Improved pre-processing steps such as sample formation and intensity normalization have enriched accuracy. A BDBN with numerous facial expression classifications was employed in another investigation: the accuracy on the CK+ database was 96.7%, whereas it was 91.8% on the JAFFE database; on the other hand, the execution period lasted eight days.
In this study, the proposed technique saves processing time while improving recognition performance using wireless communication accompanied by a cloudlet server. The proposed method uses numerous pre-processing techniques in conjunction with a CNN to achieve excellent accuracy. It is also recommended that the A.R.s focus on critical data in categorization to forecast projected expressions, which also helps to cut down processing time. In addition, parallelism [30] is frequently employed to improve speed and precision.
Finally, we conclude that appearance-based techniques can obviate the need for meticulously created visual pieces to characterize a gaze in the eye-tracking system. However, we incur a substantial penalty in execution time and storage space if we feed the complete input image to a classifier to forecast the gaze. Moreover, training eye-image data, including poses and locations, is required during the development phase.
Classifying distinct face portions is the primary problem when tracking a human face and extracting features from an infrared facial image. Poses have a direct impact on a person's facial expression, and the head movement and the poses of a gaze vector are inextricably linked. We suggest a two-part flow diagram, illustrated by our experiment's results in Figure 2. First, we use all strategies to build Experiment Ⅰ's face-region feature vectors.
Here we have followed the processes of registering an image, maintaining its sequence, and detecting the central point and facial regions. Then, using the mapping function obtained from the preceding processes, we examined the correlation among the class calibrations in Experiment Ⅱ. The system proceeds to Experiment Ⅱ after completing the calibration process; otherwise, it returns to the beginning of Experiment Ⅰ. Finally, from the grouped regions, we proceeded to retrieve attributes.
Image registration is required to guarantee an acceptable series for the infrared image dataset collected from facial images. First, the image must be registered in the database if it has not already been. As a result, the proposed technique can help detect picture duplication during image registration. During the image registration process, video images are converted into frames.
Due to noise in the images, traditional recognition methods have shown that obtaining correct portions of any part of a human face in unusual settings (low lighting, darkness, a rainy period, or a natural calamity) is exceedingly challenging. We applied the notion of using histograms to avoid such difficulties, which ensures that lighting effects do not cause too many complications. Normal and sensitive histograms can be used to embed 3D data. Noise can be removed from the infrared image at this stage; it is common practice to use nonlinear optical filtering to remove noise from an image. Pre-handling is done to support the different types of filtering used on the infrared image. In our application, we applied an adaptive Wiener filter (AWF) [43] to remove Gaussian noise and other noise accompanying the infrared images.
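As an illustrative sketch only (the paper does not give its AWF implementation), a local adaptive Wiener filter can be written in plain Python/NumPy. The 5×5 window and the noise-variance estimate (the mean of the local variances) are our assumptions, not values taken from the paper:

```python
import numpy as np

def adaptive_wiener(img, win=5, noise_var=None):
    """Minimal adaptive (local) Wiener filter for a 2-D greyscale array.

    Each pixel is pulled toward its local mean; the pull is strongest
    where the local variance is close to the (estimated) noise variance.
    """
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    mean = np.empty((h, w), dtype=float)
    var = np.empty((h, w), dtype=float)
    for i in range(h):                       # naive sliding window, for clarity
        for j in range(w):
            patch = padded[i:i + win, j:j + win]
            mean[i, j] = patch.mean()
            var[i, j] = patch.var()
    if noise_var is None:
        noise_var = var.mean()               # crude noise-power estimate
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mean + gain * (img - mean)
```

On a smooth frame corrupted by Gaussian noise, the filtered output has a lower mean squared error against the clean frame than the noisy input does.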
Algorithm 1. Pose estimation (PE) algorithm

Pm = PE(P2D, P3D, fw, f)
Input: P2D, P3D, fw, f
X = Q[P2D]; c = R[X]
Y = P2Dx / f; X = P2Dy / f
h = [P3D, c]; o = Pinv(h)
while (true) {
    j = o · Y; k = o · X
    Lz = 1 / sqrt((1/j) · (1/k))
    pm1 = j · Lz; pm2 = k · Lz
    Rm1 = pm1(1:6); Rm2 = pm2(1:6)
    Rm3 = (Rm1 / ‖Rm1‖) × (Rm2 / ‖Rm2‖)
    pm3 = [Rm3, Lz]
    c = h · pm3 / Lz
    YY = Y; XX = X
    Y = c · fw · P2Dx; X = c · fw · P2Dy
    Ex = Y − YY; Ey = X − XX
    if (‖E‖ < Ex) {
        pm6 = [pm1(1:6), pm2(1:6), pm3(1:6), pm4(1:6), pm5(1:6), pm6(1:6), Lz, 1]tm
        break
    }
}
Output: Pm
This is a straightforward way of determining the safest method for re-establishing a degraded signal. The suggested study employs the AWF to handle photo positions efficiently; the AWF is used to simplify the image with the most negligible fluctuation. Histograms are commonly balanced to improve the picture's uniformity. Histogram correction is a computer-assisted process for enhancing visual contrast: the prime intensity values are fundamentally increased, i.e., the range of image intensity is broadened, creating a less close association with the development of region ties. As a result, following histogram correction, the usual picture contrast increases.
$N_x = \frac{P}{TP}$ (1)
where x = 0, 1 and −1; P is the number of pixels with intensity x, and TP is the total number of pixels.
If we want to calculate the histogram-equalized image, we may follow the equation below:
$HQ_{m,n} = \log_e(1-x)\left(\sum_{x=0}^{b_{m,n}} N_x\right)$ (2)
where $\log_e(1-x)$ represents the nearest-neighbourhood integer value.
The similar representation with respect to the pixel intensity value is as follows:
$\frac{\partial P}{\partial x}\int_0^P HQ(x) = \frac{\partial P(x)}{\partial y} = \frac{\partial P(P_{1x} \cdot P)\,\delta}{\delta P}$ (3)
Finally, the probability distribution function (PDF) can be illustrated uniformly as $\frac{\partial P}{\partial x}$.
However, the outcome shows that the equalization method can smooth and improve histograms.
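The equalization pipeline of Eqs. (1) and (2), i.e., the normalized histogram $N_x = P/TP$ followed by its running sum, can be sketched as below. The 256-level greyscale mapping is a standard choice on our part, not a detail from the paper:

```python
import numpy as np

def equalize(img, levels=256):
    """Histogram equalization of an integer greyscale image.

    N_x = (pixels with value x) / (total pixels)   -- Eq. (1)
    The cumulative sum of N_x plays the role of the summation in Eq. (2)
    and is used as the intensity remapping function.
    """
    hist = np.bincount(img.ravel(), minlength=levels)
    n_x = hist / img.size            # normalized histogram N_x
    cdf = np.cumsum(n_x)             # running sum of N_x
    return ((levels - 1) * cdf[img]).astype(np.uint8)
```

Applied to a low-contrast image (intensities packed into a narrow band), the output spreads over nearly the full 0-255 range, which is the contrast broadening described above.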
Image line-boundary detection [47], also known as edge identification, is critical in visual interpretation. Edges store massive amounts of information, so the image size is drastically reduced and less restorative material must be combed through, preserving the image's essential core elements. Edge location is extensively employed in image separation because borders frequently appear at picture-object boundaries. We incorporated the AROI by selecting from the six divided classes of the face regions. The boundaries of a target depicted in an image or volume are calculated here. This dividing method determines whether or not neighboring pixels of the starting seeds should be added, combining pixels or sub-regions with a local tool. The simplest of these processes is pixel amalgamation, which starts with a collection of seed points and grows regions by linking pixels with common characteristics.
$AROI_S = \sum\limits_{m,n \in Q^2}^{S} Vel \cdot (H_{p_m}, L_{p_n})\, P \cdot \log d_m + \delta \int e_m\, \partial P$ (4)
where $AROI_S$ is the separated AROI, $Vel$ is the velocity measured by the gradient value, $H_{p_m}$ and $L_{p_n}$ are the high and low pixel values, $\log d_m$ represents the spatial image size, $\delta$ signifies the image frequency coefficient, and $d_m$ is the distance between two pixels.
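The pixel-amalgamation step above (growing a region from seed points by linking neighbouring pixels with common characteristics) can be sketched as a simple intensity-tolerance flood fill. The 4-neighbourhood and the tolerance threshold are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region from one seed pixel.

    A 4-neighbour is added to the region if its intensity differs from
    the seed value by at most `tol` -- the "common characteristics" test.
    Returns a boolean mask of the grown region.
    """
    h, w = img.shape
    seed_val = int(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not mask[ni, nj]:
                if abs(int(img[ni, nj]) - seed_val) <= tol:
                    mask[ni, nj] = True
                    queue.append((ni, nj))
    return mask
```

Seeded inside a uniform bright patch, the mask covers exactly that patch and nothing outside it.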
We used a grey-level co-occurrence matrix (GLCM) to understand texture characteristics. It represents the grey level: the input image's spatial information determines the probability of matches with the values obtained. Regardless of these traits, this method examines 18 texture attributes. We applied an image-retrieval method in which a sample image file is provided and the nearest images are retrieved from a vast dataset. A range of infrared images is used to evaluate the algorithm's performance on the texture dataset. We propose sixteen texture classes; for all these examples, each texture image is broken into six sub-images. Many images are obtained depending on the distance between the query and the entries of the dataset. The image features are extracted and used in the investigation. The GLCM can select the pixel frequency within each individual result, and the segmentation's directional value can then be used to extract the image attributes utilized in the segmentation. The grey-level co-occurrence matrix technique is as follows:
$C(m,n) = \dfrac{V_c(m,n,u,v)}{\sum_{m=1}^{H} V_c(m,n,u,v)}$ (5)
where $V_c$ is the vector; $m, n, u, v$ are the pixel values with respect to high and low; and $C$ is the image characteristic. We used the grey-level co-occurrence matrix to obtain the various attributes for feature extraction. Finally, the features are chosen based on texture and color.
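A minimal GLCM, normalized as in Eq. (5), together with two common texture attributes (contrast and energy) can be sketched as follows. The single (1, 0) pixel offset and the small number of grey levels are illustrative choices rather than the paper's settings:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for one pixel offset.

    Counts pairs (img[i, j], img[i+dy, j+dx]) and normalizes so the
    entries sum to 1 (the denominator in Eq. (5)). Also returns two
    standard texture attributes derived from the matrix.
    """
    mat = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            mat[img[i, j], img[i + dy, j + dx]] += 1.0
    mat /= mat.sum()
    idx = np.arange(levels)
    contrast = float(np.sum((idx[:, None] - idx[None, :]) ** 2 * mat))
    energy = float(np.sum(mat ** 2))
    return mat, contrast, energy
```

As a sanity check, a perfectly uniform image has zero contrast and maximal energy (1.0), since all co-occurring pairs fall into one matrix cell.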
We incorporated the Enhanced Cuckoo Search Optimization (ECSO) algorithm [33] to obtain the adaptive optimized classification (AOC). The cuckoo search algorithm (CSA) evaluates the inconsistency characteristics; moreover, CSA is proposed to minimize the cost of the network congestion we may face while collecting input images. In the ECSO algorithm, we follow the guideline of placing a random value on each node while selecting an arbitrary node on the cloudlets. The next cloudlet moves to the strongest node with the most significant number of images. The host server is static, and the cuckoo value may be calculated in conjunction with the probability Pr[0, 1] by the host value. ECSO is the method required to overcome the challenges posed by network congestion and the image optimization variables.
Algorithm 2. ECSO

Input: Image_Features (Imgftr), Image_Coordinate (Imgcor)
Output: Classified_Value (CLval)
Begin by computing the random value (RV)
for m = 1 : Range(Imgftr, 1)
    for n = 1 : Range(Imgftr, 1)
        Distance(m, n) = sqrt((Imgftr(m, 1) + Imgftr(m − 1)) + (Imgftr(m, 1) − Imgftr(n, 1))²)
    end
end
Imgftr = Imgftr / Distance(m, n)
RV = Imgftr / Distance(m, n)
Class_label (CL) = unique(node)
L = length(CL)
for m = 1 : L
    T = mean_all_cloudlet_node(Img)
    X(Img, n) = −(1/2) · T · mean_all_cloudlet_node + log(m)
    X(Img, 2:end) = T
end
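For orientation, a bare-bones cuckoo search of the kind ECSO builds on can be sketched as below. The heavy-tailed step, the abandonment fraction `pa`, the search box, and the sphere test function are generic textbook choices on our part, not the paper's ECSO:

```python
import numpy as np

def cuckoo_search(f, dim=2, n_nests=15, iters=200, pa=0.25, seed=1):
    """Minimal cuckoo search minimizing f over [-5, 5]^dim.

    Heavy-tailed (Levy-like) steps propose new nests; a nest is replaced
    only if the candidate improves it, and each round a fraction `pa` of
    the worst nests is abandoned and re-seeded at random.
    """
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-5, 5, (n_nests, dim))
    fit = np.array([f(x) for x in nests])
    for _ in range(iters):
        # Simplified heavy-tailed step: normal divided by sqrt of |normal|.
        step = rng.standard_normal((n_nests, dim)) / np.sqrt(
            np.abs(rng.standard_normal((n_nests, 1))) + 1e-12)
        best = nests[np.argmin(fit)]
        cand = nests + 0.01 * step * (nests - best)
        cand_fit = np.array([f(x) for x in cand])
        improved = cand_fit < fit
        nests[improved], fit[improved] = cand[improved], cand_fit[improved]
        # Abandon the worst pa-fraction of nests (the best is never dropped).
        n_drop = int(pa * n_nests)
        worst = np.argsort(fit)[-n_drop:]
        nests[worst] = rng.uniform(-5, 5, (n_drop, dim))
        fit[worst] = np.array([f(x) for x in nests[worst]])
    i = int(np.argmin(fit))
    return nests[i], float(fit[i])
```

On the sphere function the best fitness found drops close to zero, illustrating the exploration/abandonment balance that ECSO tunes for its cloudlet-node selection.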
Using linear discriminant analysis (LDA), the significance of each image is considered after the infrared images are categorized. The results of the classifier are given to the LDA. This approach measures the precision of the picture statistics, and LDA plays a remarkable role in infrared image classification. It is usual practice to utilize LDA, a data-analysis tool, to reduce the dimension of numerous interconnected variables while maintaining the maximum amount of relevant data. As a type of image categorization, LDA analysis is valid. Matrix creation is demonstrated for use by operators performing image processing. We illustrated the LDA investigation process as follows. The LDA function collects samples in the first phase to prepare properties from the testing datasets. This dataset is built from the six divided classes, and the training datasets are prepared and collected from the twelve regions. The LDA has K classes (K ≤ 6) and R regions (R ≤ 12). There is a vector representation for each class, represented in multidimensional space. In choosing the features from the regions, we applied the searching algorithm (ECSO) as stated in Algorithm 2. The sensitive picture histogram is exhibited in Figure 4: there are four images in this set, each of which has been converted utilizing sensitive histograms and various illuminations.
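For the two-class case, the LDA dimension reduction described above reduces to the Fisher discriminant; a minimal sketch (our own, not the paper's implementation) is:

```python
import numpy as np

def fisher_lda(X1, X2):
    """Two-class Fisher discriminant direction.

    w maximizes between-class scatter over within-class scatter; the
    feature vectors are then projected onto the single direction w.
    """
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Within-class scatter: sum of (unnormalized) class covariances.
    Sw = np.cov(X1, rowvar=False) * (len(X1) - 1) + \
         np.cov(X2, rowvar=False) * (len(X2) - 1)
    w = np.linalg.solve(Sw, m1 - m2)
    return w / np.linalg.norm(w)
```

With K classes the same idea generalizes to at most K − 1 discriminant directions (so here at most 5 for K ≤ 6), which is the dimensionality reduction LDA provides.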
As shown in Figure 5, the features' weights (F.W.) must now be determined, establishing the image's critical pixels and their weights. The importance of the features represented by the pixels is reflected in the weights; for example, a set of features represents an A.R. The method of locating the POI and important A.R.s such as the nose tip, R.E., L.E., and the lip regions is depicted in Figure 5. Essential qualities are given larger weights to improve the dependability of the information, which improves recognition.
The following phase separates the features. The Stochastic Face Shape Model (SFSM) and Optimized Principal Component Analysis (OPCA) were used to compare patterns on the original and altered pictures (after the angle corrections). From there, the features are retrieved and expressed in vector form. When the vectors are formed, Phase Ⅰ is done. It is worth mentioning that an image-based sequence should be kept. The procedure for extracting facial features and preparing vectors is described below.
This section explains how to track facial features in video sequences and use the head-posture estimation technique. Previous head pose estimation (P.E.) methods have relied on a stereo camera to provide correct 3-D information for the head pose and to make the necessary correction by rotation and normalization [35]. However, a head model's complex illustration and exact starting value make a real-time solution difficult. For example, the human head is frequently modeled as an ellipsoid; the cylindrical head model (CHM) therefore calculates the head position with 3-D positions on the corresponding sinusoidal surface. The 2-D to 3-D alteration scheme is utilized to obtain the head-posture statistics when a 2-D facial characteristic is traced in each video frame. Pose scaling with iterations is a two-dimensional to three-dimensional adaptation, because the 2-D face characteristics have a variety of ramifications when recreating the position.
Our proposed method detects the regions based on the feature weights of the corresponding regions. First, we smoothed the ocular region using the L0 gradient minimization method (GMM) to estimate the radius, since this helps remove noise at the pixel level. Then we applied the Canny edge detector to the ocular areas; a few invalid edges appear here, which are filtered out. Finally, we collect the relevant information from the identified regions depending on E1 and E2.
$E_1 = \sum (RE \times CR)$  (6)
$E_2 = \sqrt{G_X^2 + G_Y^2}$  (7)
where RE and CR represent the regions; any region's radius is measured from the values of RE and CR. The pixels are measured horizontally and vertically, and GX and GY denote these two gradient components, respectively. To detect a region, we reduce the intensity inside that region while maximizing its strength, or weight. The parameter τ controls this trade-off. That is,
$(U_c, V_c) = \min_{(x,y)} \left\{ E_1(U,V) - \tau \left( \int_{-x/5}^{x/5} E_1(U,V)\,ds + \int_{4x/5}^{6x/5} E_2(U,V)\,ds \right) \right\}$  (8)
where $(U_c, V_c)$ is the coordinate of that region. The intervals are set to $[-\pi/5, \pi/5]$ and $[4\pi/5, 6\pi/5]$ so that the regions do not intersect each other.
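The gradient-magnitude term E2 of Eq (7) can be computed per pixel with central differences, as in this small sketch; the helper name `gradient_magnitude` and the toy image are illustrative, not from the paper's pipeline.

```python
import math

def gradient_magnitude(img, x, y):
    """E2 at pixel (x, y): sqrt(GX^2 + GY^2) using central differences.

    `img` is a list of rows of grey levels indexed as img[y][x];
    border pixels (where a neighbour is missing) are not handled here.
    """
    gx = (img[y][x + 1] - img[y][x - 1]) / 2.0   # horizontal component GX
    gy = (img[y + 1][x] - img[y - 1][x]) / 2.0   # vertical component GY
    return math.sqrt(gx * gx + gy * gy)
```

A strong horizontal edge (grey level jumping between rows) yields a large GY and hence a large E2, which is exactly what the eye-boundary detection relies on.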
Here G represents the identified region and p the vector of that region, which transmits gaze data through a mapping function. The user performs a calibration procedure during which the region vectors are registered; the mapping function links each region's vector to its coordinate value (U, V). We applied a simple linear SVM model (SLM) and also incorporated a polynomial model (PM) to establish the appropriate mapping functions. In addition, second-order and third-order polynomial functions were applied in the calibration stage to obtain better synthesized results. As a result, the mapping can be accurately determined for each segregated frame.
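A second-order polynomial mapping of the kind used in the calibration stage can be sketched as follows for one coordinate. The helper name `fit_quadratic` and the three-sample exact fit are illustrative assumptions; a real calibration would use more samples and least squares, and fit both U and V.

```python
def fit_quadratic(samples):
    """Fit u = a*v^2 + b*v + c exactly through three calibration samples
    (v, u) via Lagrange interpolation, returning the mapping function."""
    (v0, u0), (v1, u1), (v2, u2) = samples

    def basis(v, vi, vj, vk):
        # Lagrange basis polynomial: 1 at vi, 0 at vj and vk.
        return ((v - vj) * (v - vk)) / ((vi - vj) * (vi - vk))

    return lambda v: (u0 * basis(v, v0, v1, v2)
                      + u1 * basis(v, v1, v0, v2)
                      + u2 * basis(v, v2, v0, v1))
```

Once fitted during calibration, the returned function maps a region-vector component to a screen coordinate for each segregated frame.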
We completed our experiment using our quantitative and qualitative evaluation algorithm. The entire procedure is organized into five regions, executed in parallel in a time-sharing manner. In this testing phase, we examine forehead, eye, nose-tip, and lip feature detection.
Recognition and feature extraction of the eye is a challenging task. We propose the EPDF function to detect the eye area accurately. The CK+ and NVIE datasets are used in our experiment; the CK+ dataset contains 1000 grayscale images of ten subjects under different lighting conditions and scales. Our experiment achieves higher recognition accuracy: 91.73% on RGB images and 92.39% on detection with twenty-one infrared images.
FER is a new field that requires a high identification rate for accuracy. This study combines the AROI and CNN methods for identifying and classifying facial expressions [35]. Five optimized active regions of interest (OAROI) [30] are taken into account, namely the forehead, left eye (LE), right eye (RE), nose tip, and lip regions, and these AROIs are used to train the CNNs. The final classification outcome is obtained using a decision-tree-level synthesis method. The applied method is shown in Figure 6.
The extents of the optimized active regions (OAR) differ for each image. Therefore, the following steps normalize them:
1. Each OAR of interest (OAROI) is resized to 120 × 80 pixels.
2. The features and their respective feature weights are learned by the CNN's convolutional layers and then by the sub-sampling layers.
3. Lastly, the result passes through the fully connected layer.
The fully connected layers apply the learned feature weights to obtain the classified expressions. The label is one-hot encoded: the code has length five, with one bit per region class, where a 0 bit means false and a 1 bit means true, so exactly one bit is set for the predicted region. Figure 6 displays an example in which the resulting region is the nose tip.
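The five-bit one-hot labelling above can be written as a one-liner; the region identifiers below are illustrative names for the five classes.

```python
# Order of region classes; one bit of the one-hot code per class.
REGIONS = ["forehead", "left_eye", "right_eye", "nose_tip", "lip"]

def one_hot(region):
    """Five-bit one-hot label: a single 1 marks the predicted region class."""
    code = [0] * len(REGIONS)
    code[REGIONS.index(region)] = 1
    return code
```

For example, `one_hot("nose_tip")` sets only the fourth bit, matching the nose-tip example of Figure 6.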
The final recognition result is obtained in the last phase using a decision-tree-level synthesis scheme. The decision-level fusion technique works as follows.
From the above, if two or more classifiers categorize the expression as the same class i, i ∈ I, then that class i is taken as the final classification result; hence, when two or more CNN classifications agree, the synthesized result improves. On the other hand, when the results of all five CNNs are dissimilar, we choose the outcome of the CNN, among the five AROIs (forehead, LE, RE, nose, and lip), with the maximum accuracy, as shown in the "Experimental Section".
FinalResult = { i, if 2 or more CNNs classify the expression as i, i ∈ I; the result of the CNN for the AROI (forehead, left eye, right eye, nose, or lip) with the maximum accuracy, otherwise
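The fusion rule above can be sketched directly; the function name `fuse` and the example labels are illustrative, while the fallback accuracies mirror the per-region CNN figures reported later in Table 2.

```python
from collections import Counter

def fuse(predictions, accuracies):
    """Decision-level fusion: if two or more region CNNs agree on a class,
    return that class; otherwise fall back to the prediction of the region
    CNN with the highest standalone accuracy."""
    counts = Counter(predictions.values())
    label, votes = counts.most_common(1)[0]
    if votes >= 2:
        return label                       # majority (or plurality >= 2) wins
    best_region = max(accuracies, key=accuracies.get)
    return predictions[best_region]        # all five disagree: trust the best CNN
```

With two region CNNs voting "happy", `fuse` returns "happy"; with five distinct votes, it returns whatever the most accurate region's CNN predicted.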
The suggested CNN configuration is displayed in Figure 7. The CNN accepts 80 × 60 infrared input images and incorporates three convolution layers, each followed by a corresponding sub-sampling layer. As shown in Figure 7, the kernel sizes are 5 × 5 for the convolution layers and 2 × 2 for the sub-sampling layers, and the stride is fixed to 2 for each sub-sampling layer. The dimension and number of feature maps after the first convolution is 80 × 60 × 32, and the same pattern is applied to the remaining layers. Each of the two fully connected layers contains 1024 neurons. Finally, the CNN generates the outputs from the vote results of the five classified statements; the output N, as shown in Figure 7, is calculated from the images stored in the database.
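The stated layer sizes can be sanity-checked with the standard output-size formula. The same-padding assumption for the 5 × 5 convolutions is ours (it is what makes the reported 80 × 60 × 32 feature-map size come out); `conv_out` is a hypothetical helper, not part of the paper's code.

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution or pooling layer:
    floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1
```

With pad = 2 the 5 × 5 convolution preserves the 80 × 60 extent, and each 2 × 2 stride-2 sub-sampling layer halves both dimensions (80 × 60 → 40 × 30, and so on down the three stages).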
The testing uses ground-truth and CK+ datasets. The tracking accuracy is shown in Table 1. For the CK+ image set, detection in the nose-tip region is always highest, at 93.98%, while the accuracy in the forehead, left-eye, and right-eye regions is about 89.71%, 90.89%, and 92.57%, respectively. The lip region, at 93.02%, is more accurate than the eye regions. Tracking with infrared images outperformed RGB images: using twenty-one infrared image sets, we achieved 94.35% in the nose-tip region, and the accuracy in the forehead, left-eye, right-eye, and lip regions is 91.21%, 91.47%, 93.31%, and 94.46%, respectively. The average detection accuracy therefore rises to 92.96% from 92.03%, which is higher than with RGB images.
Applied images | Forehead region accuracy | Left eye region accuracy | Right eye region accuracy | Nose tip region accuracy | Lip region accuracy |
RGB images | 89.71% | 90.89% | 92.57% | 93.98% | 93.02% |
Infrared images | 91.21% | 91.47% | 93.31% | 94.35% | 94.46% |
A quicker method of recognizing facial temperature is obtained by applying infrared images, since an infrared image is self-sufficient for measuring temperature quickly. Moreover, we incorporated an AI-based machine intelligence scheme that accelerates the diagnosis process. To choose the active regions from the six classes and twelve regions, we applied the Enhanced Cuckoo Search Optimization (ECSO) algorithm, as stated in Algorithm 2, over a wireless network with a cloudlet server so that processing becomes more accessible with minimal infrastructure. The devices (infrared or mobile cameras) measure the temperature first; deep learning with the CNN is then applied during record processing and analysis to yield better synthesized results. Table 1 shows the temperature measurement accuracy for the five regions. The recognition rate is always highest for CK+ image-set identification in the nose-tip region, at 93.98%. The accuracy for the left-eye and right-eye regions is around 90.89% and 92.57%, and the lip region, at 93.02%, is more accurate than the eyes. The tracking outcome improved when infrared images were used instead of RGB images: we obtained 94.35% in the nose-tip region by applying twenty-one infrared images.
Similarly, the accuracy is 91.47%, 93.31%, and 94.35% in the left-eye, right-eye, and nose-tip regions. As a result, the average recognition rate increased to 92.39% from 91.73%, higher than with the RGB image set. As shown in Table 2, the CNN method applied to the forehead, LE, RE, nose, and lip regions achieved accuracies of 89.32%, 91.14%, 90.75%, 91.15%, and 93.32%, respectively. By incorporating our decision-tree-level synthesis method with ten-fold validation on the same regions, we achieved 91.36%, 94.14%, 93.49%, 94.44%, and 97.26%, respectively. Overall, we achieved 3.29% greater accuracy by incorporating the decision-tree-level synthesis scheme and the ten-fold validation method.
Investigators | Applied method | JAFFE | CK+ | MMI | NVIE | SFEW | OWN |
S. Happy et al. [17] | SFP | 85.06% | 89.64% | | | | |
Y. Liu et al. [21] | AUDN + 8-fold validation | 63.40% | 92.40% | | | | |
L. Zhong et al. [23] | CSPL | 89.89% | 73.53% | | | | |
S. H. Lee et al. [27] | SRC | 87.85% | 94.70% | 93.81% | | | |
A. Mollahosseini et al. [29] | DNN | 93.20% | 77.90% | | | | |
R. Saranya et al. [31] | CNN + LSTM with 10-fold validation | 81.60% | 56.68% | 95.21% | | | |
A. T. Lopes et al. [32] | AAM / Integrated AAM / DAF / Integrated DAF | 66.24% / 71.72% / 79.16% / 91.52% | | | | | |
J. A. Zhanfu et al. [39] | AAM + infrared thermal + KNN | 63.00% | | | | | |
J. A. Zhanfu et al. [39] | Bayesian networks (BNs) | 85.51% | | | | | |
P. Shen et al. [34] | KNN | 73.00% | | | | | |
Our applied method | CNN applied on the forehead, left-eye, right-eye, nose, and lip regions | | | | | | 89.32% / 91.14% / 90.75% / 91.15% / 93.32% |
Our proposed method | Decision-tree-level synthesis and 10-fold validation applied on the forehead, left-eye, right-eye, nose, and lip regions | | | | | | 91.36% / 94.14% / 93.49% / 94.44% / 97.26% |
Our achievement | We achieved 3.29% increased accuracy by incorporating the decision-tree-level synthesis scheme and 10-fold validation. | | | | | | |
The authors received no funding for this study.
The authors declare there is no conflict of interest.
Algorithm 1. Pose estimation (PE) algorithm
    Pm = PE(P2D, P3D, fw, f)
    Input:  P2D, P3D, fw, f
    X = Q[P2D];  c = R[X]
    Y = P2D_x / f;  X = P2D_y / f
    h = [P3D, c];  o = pinv(h)
    while (true)
    {
        j = o * Y;  k = o * X
        Lz = 1 / sqrt(|j| * |k|)
        pm1 = j * Lz;  pm2 = k * Lz
        Rm1 = pm1(1:6);  Rm2 = pm2(1:6)
        Rm3 = (Rm1 / ||Rm1||) x (Rm2 / ||Rm2||)
        pm3 = [Rm3, Lz]
        c = h * pm3 / Lz
        YY = Y;  XX = X
        Y = c . fw . P2D_x;  X = c . fw . P2D_y
        Ex = Y - YY;  Ey = X - XX
        if (||E|| < Ex)
        {
            pm6 = [pm1(1:6), pm2(1:6), pm3(1:6), pm4(1:6), pm5(1:6), pm6(1:6), Lz, 1]^T
            break
        }
    }
    Output: Pm
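The pose-estimation listing above follows the shape of a POS/POSIT-style weak-perspective step: a pseudoinverse of the model-point matrix, a scale recovered from the two pose rows, and a cross product supplying the third rotation row. The sketch below shows only that core step, under stated assumptions: the function name `pos`, the variable names `P3D`/`P2D`, and the synthetic check are illustrative, not the paper's.

```python
import numpy as np

def pos(P3D, P2D):
    """Weak-perspective pose from 3D-2D correspondences (POS step).

    P3D: (n, 3) model points; P2D: (n, 2) image points in camera units.
    Returns rotation R (3x3), scale s, and 2D translation t.
    """
    A = np.hstack([P3D, np.ones((len(P3D), 1))])   # (n, 4) design matrix
    o = np.linalg.pinv(A)                          # pseudoinverse, as in the listing
    u = o @ P2D[:, 0]                              # solves for [s*R1, s*tx]
    v = o @ P2D[:, 1]                              # solves for [s*R2, s*ty]
    I, J = u[:3], v[:3]
    s = np.sqrt(np.linalg.norm(I) * np.linalg.norm(J))   # geometric-mean scale
    R1 = I / np.linalg.norm(I)
    R2 = J / np.linalg.norm(J)
    R3 = np.cross(R1, R2)                          # third row completes the rotation
    R = np.vstack([R1, R2, R3])
    t = np.array([u[3], v[3]]) / s
    return R, s, t

# Synthetic check: project points with a known pose, then recover it.
rng = np.random.default_rng(0)
P3D = rng.standard_normal((8, 3))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
s_true, t_true = 0.8, np.array([0.1, -0.2])
P2D = s_true * ((P3D @ R_true.T)[:, :2] + t_true)
R, s, t = pos(P3D, P2D)
```

With noise-free correspondences the pseudoinverse recovers the pose exactly; full POSIT would additionally iterate the perspective correction, as the while-loop in the listing does.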
Algorithm 2. ECSO
    Input:  Image_Features (Imgftr), Image_Coordinate (Imgcor)
    Output: ClassifiedValue (CLval)
    Begin: compute the random value (RV)
    for m = 1 : Range(Imgftr, 1)
        for n = 1 : Range(Imgftr, 1)
            Distance(m, n) = sqrt( Imgftr(m, 1) + Imgftr(m - 1) + (Imgftr(m, 1) - Imgftr(n, 1))^2 )
        end
    end
    Imgftr = Imgftr / Distance(m, n)
    RV = Imgftr / Distance(m, n)
    ClassLabel (CL) = unique(node)
    L = length(CL)
    for m = 1 : L
        T = mean_all_cloudlet_node(Img)
        X(Img, n) = -(1/2) * T * mean_all_cloudlet_node + log(m)
        X(Img, 2:end) = T
    end
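The double loop in ECSO builds a pairwise Distance(m, n) matrix over the image feature rows. The extracted formula is ambiguous, so the sketch below uses a plain Euclidean pairwise distance as an assumption; the names `pairwise_distances` and `feats` are illustrative, not the paper's.

```python
import numpy as np

def pairwise_distances(features):
    """Distance(m, n) for every pair of feature rows,
    vectorized instead of the nested for-loops in Algorithm 2."""
    diff = features[:, None, :] - features[None, :, :]  # (m, n, d) differences
    return np.sqrt((diff ** 2).sum(axis=-1))            # Euclidean norm per pair

feats = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]])
D = pairwise_distances(feats)  # 3x3 symmetric matrix with zero diagonal
```

The resulting matrix is symmetric with a zero diagonal, which is what the subsequent normalization step (Imgftr divided by the distances) appears to rely on.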
Applied images | Forehead region accuracy | Left eye region accuracy | Right eye region accuracy | Nose tip region accuracy | Lip region detection accuracy |
RGB images | 89.71% | 90.89% | 92.57% | 93.98% | 93.02% |
Infrared images | 91.21% | 91.47% | 93.31% | 94.35% | 94.46% |
Investigators | Applied method | Accuracy found by applied datasets
 | | JAFFE | CK+ | MMI | NVIE | SFEW | OWN
S. Happy et al. [17] | SFP | 85.06% | 89.64% | | | |
Y. Liu et al. [21] | AUDN + 8-fold validation | 63.40% | 92.40% | | | |
L. Zhong et al. [23] | CSPL | 89.89% | 73.53% | | | |
S. H. Lee et al. [27] | SRC | 87.85% | 94.70% | 93.81% | | |
A. Mollahosseini et al. [29] | DNN | 93.20% | 77.90% | | | |
R. Saranya et al. [31] | CNN + LSTM with 10-fold validation | 81.60% | 56.68% | 95.21% | | |
A. T. Lopes et al. [32] | Active appearance model (AAM) | 66.24% | | | | |
 | Integrated AAM | 71.72% | | | | |
 | DAF | 79.16% | | | | |
 | Integrated DAF | 91.52% | | | | |
J. A. Zhanfu et al. [39] | AAM + infrared thermal + KNN | 63.00% | | | | |
 | Bayesian networks (BNs) | 85.51% | | | | |
P. Shen et al. [34] | KNN | 73.00% | | | | |
Our applied method | CNN applied per region | Forehead: 89.32%, Left eye: 91.14%, Right eye: 90.75%, Nose: 91.15%, Lips: 93.32%
Our proposed method | Decision-tree-level synthesized technique with 10-fold validation per region | Forehead: 91.36%, Left eye: 94.14%, Right eye: 93.49%, Nose: 94.44%, Lips: 97.26%
Our achievement | Incorporating the decision-tree-level synthesis scheme with 10-fold validation increased accuracy by 3.29%.