
The quantity of medical images associated with patient care has increased markedly in recent years due to the rapid development of hospitals and research facilities. Every hospital generates more medical images, with a single imaging appliance producing more than 10 GB of data per day. Software is used extensively to scan and locate diagnostic images and to identify precise patient information, which can be valuable for medical research and advancement. An image retrieval system is used to meet this need. This paper proposes an optimized classifier framework based on a hybrid adaptive neuro-fuzzy approach to accomplish this goal. Fuzzy sets represent the vagueness that occurs in such data sets in the user query, the similarity measurement, and the image content. The hybrid adaptive neuro-fuzzy classifier is enhanced with improved cuckoo search optimization. Score values are determined by applying linear discriminant analysis (LDA) to the classified images. The preliminary findings indicate that the proposed approach can be more reliable and effective than existing approaches.
Citation: Janarthanan R, Eshrag A. Refaee, Selvakumar K, Mohammad Alamgir Hossain, Rajkumar Soundrapandiyan, Marimuthu Karuppiah. Biomedical image retrieval using adaptive neuro-fuzzy optimized classifier system[J]. Mathematical Biosciences and Engineering, 2022, 19(8): 8132-8151. doi: 10.3934/mbe.2022380
Technical innovation has led to a massive number of digital images being created every day, producing vast image libraries both offline and online. Biomedical information retrieval has become an important capability in essential areas such as disease region recognition, analysis of related evidence, and the description and organization of medical image collections. An effective biomedical image retrieval program is important to enable medical personnel to accomplish their activities quickly and reliably. Thus, there is considerable interest in building servers or machines that can store and retrieve data on demand in hospitals, doctors' offices, diagnostic imaging centers, and mobile service providers. Physicians have also made such data more widely accessible to detect patient illness with diagnostic images, such as X-ray and computed tomography (CT) scans. The image retrieval method searches the images depending on the shape, type, and spatial distribution of the pixels according to the required image information. A basic image retrieval system can be broken down into two types:
● Text-based image retrieval (TBIR)
● Content-based image retrieval (CBIR)
In TBIR, images are identified based on the annotations given to them. However, TBIR is subject to pitfalls such as homonyms and orthographic mistakes. These drawbacks can be resolved by using CBIR systems. A CBIR system extracts basic features such as color, texture, and shape from an image and forms a feature vector. The same procedure is then applied to every image in the database to create its feature vector. In the similarity step, the two feature vectors are compared using a chosen distance metric.
After performing a thorough literature review on image encryption algorithms, the following issues were identified as concerns: one-dimensional chaotic maps suffer from periodic window issues and from the difficulty of selecting control parameters. Image encryption with a limited degree of randomness is undesirable, and the existing ways of generating key streams for image encryption are unsuitable. Chaotic maps were developed as a solution to the problems identified in the literature: they were designed to prevent periodic window problems, speed up key generation, encrypt images block by block, and enhance image encryption with the smallest possible number of rounds. The challenges associated with periodic windows are overcome by using a large number of chaotic maps simultaneously. The key streams are generated by an internal key generator included in the overall system architecture, which allows for faster encryption. Block-wise encryption markedly outperforms stream cipher-based encryption; it is performed using overlapping and nonoverlapping block division, has a larger key space than other algorithms, and has a lower computational complexity. In this study, the discussion begins with a review of the literature, followed by the suggested MSA QCLM algorithms; the experimental setup and a comparative analysis are then addressed, and the discussion finishes with the future scope.
In this paper, we propose a classifier that provides good performance when retrieving CT images without pixel loss. A performance comparison with existing techniques is provided, and the proposed procedure is studied and evaluated on the BraTS 2018 dataset. The proposed classifier is shown to be more effective than other existing approaches. The remainder of this paper is organized as follows. Related work is described in Section 2, followed by Section 3, which describes the proposed optimized classifier framework. Experimental results are reported in Section 4. Lastly, a discussion of the results is provided in Section 5.
Baazaoui et al. [1] suggested identifying pathological differences in CT images of the arm using a uniformity estimation tool. Texture characteristics are extracted from these images using the rotation-invariant extended local binary pattern (LBP). Campana and Keogh [2] considered the retrieval of brain MRI images to depend on local pictorial structure and proposed a region of interest (ROI) approach; LBP and Kanade–Lucas–Tomasi tracking extracted salient feature points. Dubey et al. [3] proposed a mid-level descriptor-dependent content-based image retrieval (IR) technique. These descriptors were automatically configured by identifying semantically related aspects based on existing clinical knowledge of the lower-level image properties.
Kumar et al. [4] showed that lower-order Zernike moments (ZMs) are effective in extracting image shape characteristics. Lavanya and Kannan [5] presented another noise-robust approach based on local directional mask maximum edge patterns to retrieve images and recognize faces. They applied eight directional masks of size 3 × 3 to the image patterns to obtain maximum edge patterns (MEP) and maximum edge position patterns (MEPP). Gaussian filters were also added to support this multiresolution approach, together with spherical symmetric 3D local ternary patterns.
Lazaridis et al. [6] addressed biomedical image retrieval based on texture features, recovering MRI and CT images using local ternary co-occurrence patterns (LTCoP). Manickavasagam and Selvan [7] encoded the co-occurrence of similar ternary edges for each (center) point in eight directions using first-order derivatives. This process produced marked improvements compared to LBP and LDP. The authors then considered local mesh patterns (LMeP) and local mesh peak valley edge patterns (LMePVEP), which markedly increased the average accuracy and the average retrieval rate.
Murala and Wu [8] proposed a modern method for image retrieval using a support vector machine (SVM) classifier to screen out obsolete images. Murala and Wu [9] proposed a new model termed multimodal searching and finding based on rich image content. Peng et al. [10] presented a quantization-based process for image and video retrieval; the hashing system for effective image retrieval uses a hypergraph combined with a hashing process. Rahman et al. [11] used bi-generative encoders, multimodal stochastic recurrent neural networks (RNNs), and hierarchical binary autoencoders for effective image retrieval. Song et al. [12] proposed computing transformed local bit-plane values from the bit-plane binary contents of the neighboring pixels of each pixel.
Song et al. [13] proposed an automated segmentation algorithm using fuzzy local information cluster means (FLICM) and backpropagation network classification for lung lesion identification in CT scan images. Unay et al. [14] performed automatic identification and recognition of lung nodules in CT images with an automated cuckoo search algorithm. Vipparthi et al. [15] suggested a compression-based distance measure for texture. Janarthanan et al. [16] proposed a logical foundation for both propositional and ad hoc forms of reasoning to tackle the above problem. The two rules are identical in the second case, where one or more of the propositions of the first rule are transposed into the other with a modified sign. This property was used to establish the (ad hoc) claim.
Janarthanan et al. [17] proposed syntax and semantics, in the sense of fuzzy logic, for changing/reorganizing individual rules into an acceptable form (e.g., by negating and transposing propositions from antecedent to consequent, and vice versa), so that instantiated (or refuted, negated) propositions present in the antecedent (consequent) allow forward (backward) firing of the rules. For the forward/backward logic, a fuzzy compositional rule of inference is used. Janarthanan et al. [18] proposed a fault-detection layer for wireless sensor networks in which time-varying nonlinear filters handle randomly changing nonlinearity in the environment; distributed white Bernoulli sequences address quantization errors and packet dropout, and a Type-2 Takagi–Sugeno fuzzy, robotics-based method solves recursive linear matrix inequalities between a sensor and its neighboring sensors.
Janarthanan et al. [19] optimized input data for preprocessing using an unsupervised deep-learning-assisted reconstructed coder (UDR-RC), reducing the time required to process the data from a single sensor node and improving the recognition efficiency of the features selected and extracted within the neural network for human activity recognition (HAR). Hussain et al. [20] proposed a cost-effective solution for image retrieval in diagnosis that uses a deep neural network to classify the input image. That system requires less time to extract features but requires more space to achieve this. Hatibaruah et al. [21] introduced a method to identify the similarity between CT images by comparing pixel patterns with their color and texture features. This system also considers a feature descriptor of neighborhood patterns and improves the accuracy of image retrieval.
Mistry [22] used the Laplacian score to reduce the dimensionality of the feature vector and thus improve the retrieval accuracy relative to existing systems; retrieval efficiency is measured using various distance parameters. Kumar and Mohan [23] proposed a CBIR-based retrieval system that uses the texture features of the image and extracts the similarity between pixels to improve retrieval accuracy.
Raghuraman et al. [24] discussed the Krawtchouk method and its application to image retrieval. The process includes interactive methods for selecting the region of interest, enhancing retrieval by improving feature extraction. Sabena et al. [25] used the canopy method for image retrieval and increased accuracy with the K-means clustering algorithm. Vinodhini et al. [26] proposed image enhancement of retinal images to identify diabetic retinopathy using a dehazing method. Gupta et al. [27] proposed a local binary pattern (LBP) model to retrieve images based on the CBIR approach: the first step generates an image mask, and the next creates the pattern from the mask. Manickam et al. [28] developed the local directional extrema number pattern (LDIENP) feature for computed tomography image retrieval.
A major challenge in image retrieval is the creation of practical mappings that reduce the semantic gap between high-level semantic descriptions and low-level visual image features. High-level meanings are not explicitly derived from perceptual content; they describe the more meaningful aspects of objects and scenes in images that humans interpret, and such conceptual dimensions depend on the needs and subjectivity of users. Beyond ordinary low-level visual elements, new features better suited to conveying the semantic significance of concepts are needed to close the gap between existing image retrieval approaches.
We describe the proposed method in detail in this section. The proposed method is implemented using the following steps. First, the reference images are collected from the database. This study tested the approach using publicly available datasets. We used the BraTS file, which consists of 131 CT scanned images along with simple descriptions. The BraTS dataset also contains 70 CT images without accompanying descriptions; therefore, this study considered only the 131 annotated CT images. All images are available in JPEG format, derived from 630 × 630 DICOM files with a 24-bit depth. Each image is then preprocessed. The overall process of the image retrieval system is shown in Figure 1.
Preprocessing is performed first in the proposed framework. Background noise is removed; salt-and-pepper noise and Gaussian-like noise are also present in CT images, and preprocessing is performed to erase these superfluous pixels. This step removes errors that would otherwise affect filtering and histogram computation. These noises can be removed at this stage based on the CT image. An adaptive filter, a nonlinear filter, is commonly used to separate noise from an image or signal. Noise reduction is a common technique that improves effectiveness, and preprocessing is performed to improve the contrast of the CT image. For random noise (e.g., Gaussian noise), the adaptive Wiener filter (AWF) is effective; this strategy can reliably restore a usable signal. To handle pixel neighborhoods effectively, we use the AWF, which is typically used to smooth an image with low local variation. Histograms are commonly equalized to improve the consistency of the image. Histogram equalization is an automated process used to increase image contrast: the dominant intensity values are spread out (i.e., the range of image intensity is broadened), which improves local contrast and enhances the distinction between regions. Thus, the average image contrast is increased after histogram equalization.
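As a concrete illustration of this denoising step, the sketch below combines a small median filter (for salt-and-pepper outliers) with SciPy's Wiener filter as a stand-in for the AWF. The window sizes are illustrative assumptions rather than values from this study, and Python is used here in place of the MATLAB environment used for the experiments.

```python
import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import wiener

def denoise_ct_slice(image, median_window=3, wiener_window=5):
    """Suppress impulse and Gaussian-like noise in a CT slice before feature extraction."""
    img = image.astype(np.float64)
    img = median_filter(img, size=median_window)   # removes salt-and-pepper outliers
    img = wiener(img, mysize=wiener_window)        # adaptive local-variance (Wiener) smoothing
    return img
```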
In medicine, health care is defined as the preservation of a healthy state, acting in anticipation of sickness, identifying the nature of an illness by evaluating its symptoms, and treating medical disorders. Health practitioners are responsible for providing health care, which may differ markedly across nations, groups, and people depending on the variety of medical disorders [29,30,31]. A computational approach allows for the protection of information and the preservation of secrecy and integrity. Radiologists [32] may remotely save images on a cloud server [33,34], which can later be opened by a doctor using a patient ID, allowing them to collaborate more effectively. Because medical images are sensitive information, it is critical to encrypt them before they are uploaded to the cloud for storage and retrieval. Given that retrieving encrypted images takes a long time, we use Elgamal encryption to ensure that images can be retrieved quickly while still maintaining confidentiality. In addition, we use Rivest–Shamir–Adleman (RSA) encryption on the cloud because the cloud is not a trustworthy authority. The Schnorr protocol is used in the proposed scheme to combine static and dynamic queries and attain the desired levels of security, secrecy, and integrity; a static query is composed of conventional queries, whereas a dynamic query is composed of queries that change regularly. As part of the proposed method, we use the Blocker protocol to determine who (e.g., the doctor) is obtaining images that have been updated by the radiologists. Performance is also examined in detail. Recently, searching through encrypted images using searchable symmetric encryption [35,36] and attribute-based hybrid encryption has been proposed, along with other techniques. However, retrieving images with such methods requires many numerical computations, and adaptive attacks (e.g., brute-force attacks) can potentially retrieve images by guessing their way into the system. Moreover, the complexity of the storing, searching, and updating procedures increases markedly when these techniques are used. To overcome these difficulties, we present Elgamal and Rivest–Shamir–Adleman (RSA) encryption schemes for medical cloud images in this study, which expands and improves upon an earlier study.
We let p denote the normalized histogram for each possible intensity. Thus:
$$p_y = \frac{\text{number of pixels with intensity } y}{\text{total number of pixels}} \tag{3.1}$$
where $y = 0, 1, \ldots, Y-1$.
The histogram-equalized image can be defined as:
$$H_{i,j} = \mathrm{Base}\!\left((Y-1)\sum_{y=0}^{b_{i,j}} p_y\right) \tag{3.2}$$
where $\mathrm{Base}(\cdot)$ rounds to the nearest integer; this is equivalent to the following transformation of the pixel intensity:
$$\frac{\partial N}{\partial x}\left(\int_0^{N} p_N(x)\,dz\right) = \frac{\partial}{\partial N}\big((N)(x-1)(N)\big)\,\frac{d}{dN} \tag{3.3}$$
Finally, the uniform probability density function can be represented as $\partial N/\partial x$. Although the equalization process does not produce exactly flat histograms, it smooths and improves them.
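The sketch below implements Eqs. (3.1) and (3.2) directly. It assumes the CT slice has been converted to integer gray levels; the choice of 256 levels is an assumption, since the source files are stored at 24-bit depth.

```python
import numpy as np

def equalize_histogram(image, levels=256):
    """Histogram equalization following Eqs. (3.1)-(3.2)."""
    img = image.astype(np.int64)
    hist = np.bincount(img.ravel(), minlength=levels)
    p = hist / img.size                       # Eq. (3.1): normalized histogram p_y
    cdf = np.cumsum(p)                        # running sum of p_y up to each intensity
    mapping = np.rint((levels - 1) * cdf)     # Eq. (3.2): Base(.) = nearest integer
    return mapping[img].astype(np.uint8)
```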
Boundary identification is an image line-detection process. Edge identification is an essential step in understanding image characteristics: edges contain significant elements and important information. The resulting image size is reduced markedly, requiring less material to be processed while still retaining the fundamental properties of the image. Because edges typically appear at object boundaries, edge detection is broadly used to divide images into regions containing different objects. This process is particularly applicable to biomedical images. In the first derivative of the image, the gradient approach detects the edges, and the cutoff is small; with suitable thresholding, sharp edges can be isolated. In this study, the region of interest is separated, and its extent is estimated from the boundaries of a target indicated on an image or in a volume. This segmentation approach considers pixels adjoining the starting seeds and decides whether neighbors should be added to the region. The region-growing instrument consolidates pixels or subregions into larger regions. Pixel aggregation is the simplest of these strategies: it begins with a collection of "seed" points and grows regions from them by attaching neighboring pixels with similar properties (e.g., texture and shading) to each seed point:
$$\mathrm{ROI}_{\text{segmentation}} = S\sum_{i,j\in Q^2} \frac{V_2(K_i,K_j)}{N}\log c_i + \gamma\int c_i\,dx \tag{3.4}$$
$\mathrm{ROI}_{\text{segmentation}}$ is the watershed separation; $V_2$ is the velocity gradient; $K_i$ and $K_j$ are the low and high pixel values, respectively; $N$ denotes the number of pixel blocks in the image; $\log c_i$ represents the spatial size of the image; $\gamma$ denotes the frequency coefficient of the image; and $c_i$ is the distance between the pixels.
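A minimal sketch of the seed-based region growing described above is shown below; the intensity tolerance and 4-connectivity are illustrative assumptions, since the paper does not fix them.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10.0):
    """Grow a region from `seed` (row, col): a neighbour joins when its intensity
    differs from the seed intensity by less than `tol`."""
    h, w = image.shape
    region = np.zeros((h, w), dtype=bool)
    seed_val = float(image[seed])
    region[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # 4-connected neighbours
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx] \
                    and abs(float(image[ny, nx]) - seed_val) < tol:
                region[ny, nx] = True
                queue.append((ny, nx))
    return region
```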
A gray-level co-occurrence matrix (GLCM) was used to obtain texture recognition results from textural characteristics. The GLCM is a spatial-dependency matrix of gray levels: it counts how often pairs of pixels with specific values occur in the input image in a specified spatial relationship. Based on these counts, the proposed method analyzes a total of 22 texture features. A retrieval program, when presented with a sample query, returns images from a broad dataset that are perceptually similar to it. With the following texture dataset, we perform standard retrieval experiments with a variety of CT images of the brain. Approximately 111 texture classes are identified; to create these samples, each original texture image is divided into nine subimages. For each query, the distances from the query to the other 998 images in the dataset are calculated, and the K images closest to the query are retrieved. The efficiency of a retrieval system is typically reported in parametric terms. The GLCM is an arithmetic device that can efficiently extract objects; it can also clarify image accuracy, and the image can be extracted for the study. The GLCM determines the pixel frequency at a given offset: the pixel of interest is denoted $\phi$, and the neighboring values at offset $m$ along direction $l$ are considered. Typically, $m$ has a single value, and $\phi$ is direction dependent. The directional values obtained can then extract the image attributes used for segmentation. The GLCM is defined as follows:
$$P(m,s) = \frac{G(m,s,o,\phi)}{\sum_{m=1}^{H}\sum_{s=1}^{H} G(m,s,o,\phi)} \tag{3.5}$$
where $G$ is the occurrence matrix; $m, s, o$ commonly take the pixel values $l, m, P$, which are characteristics of the image; $(m,s)$ is an element of the matrix; $l$ is associated with the direction $\phi$; and the result is the normalized component. The GLCM is used to obtain the various attributes, and some texture- and color-based features are considered in this study.
Entropy determines the information content of each pixel's neighborhood, which characterizes the artifacts when compacting the images:
$$\text{Entropy} = -\sum_{m=1}^{H-1}\sum_{s=1}^{H-1} P(s,o)\log\big(P(s,o)\big) \tag{3.6}$$
where $P(s,o)$ denotes the occurrence of the features and $H$ is a fixed constant.
By summing the high and low homogeneity values of the image obtained with the GLCM, the angular moment is calculated; it is high for homogeneous images and low otherwise:
$$\text{Angular moment} = \sum_{m=1}^{H-1}\sum_{s=1}^{H-1} P(s,o)^2 \tag{3.7}$$
Contrast evaluates the quality of the image by examining the intensity differences between areas:
$$\text{Contrast} = \sum_{s=0}^{H-1} S^2 \sum_{s=1}^{H-1} P(s,o) \tag{3.8}$$
This moment is another parameter that is used to calculate the homogeneity of the image:
$$\text{IDM} = \sum_{m=1}^{H-1}\sum_{s=1}^{H-1} \frac{1}{1+(s-o)^2}\,P(s,o) \tag{3.9}$$
This parameter (the energy) indicates how concentrated the co-occurrence distribution is:
$$E = \sum_{o=1}^{H-1}\sum_{s=1}^{H-1} P(s,o)^3 \tag{3.10}$$
The variance, i.e., the deviation from the mean, can be calculated directly from the gray levels:
$$\text{VAR} = \sum_{m=1}^{H-1}\sum_{s=1}^{H-1} P(s,o)^2 - \phi^2 \tag{3.11}$$
The sum average captures the frequency relationships between the pixels:
$$\text{SAR} = \sum_{s=0}^{2H-1} s\,P_{x+y}(s) \tag{3.12}$$
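For concreteness, the sketch below computes several of these GLCM features with scikit-image. The offset distance and direction are illustrative assumptions, the image is assumed to hold 8-bit integer gray levels, and the variance uses the standard GLCM form rather than the exact expression in Eq. (3.11).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(image, distance=1, angle=0.0, levels=256):
    """A subset of the GLCM texture features described in Eqs. (3.5)-(3.12).

    `image` must contain integer gray levels in [0, levels).
    """
    glcm = graycomatrix(image, distances=[distance], angles=[angle],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]                       # P(m, s): normalized co-occurrence matrix, Eq. (3.5)
    i, j = np.indices(p.shape)
    eps = 1e-12                                # avoids log(0)
    mu = np.sum(i * p)                         # GLCM mean, used for the variance feature
    sum_hist = np.bincount((i + j).ravel(), weights=p.ravel(), minlength=2 * levels - 1)
    return {
        "entropy": -np.sum(p * np.log(p + eps)),                   # Eq. (3.6)
        "angular_second_moment": np.sum(p ** 2),                   # Eq. (3.7)
        "contrast": float(graycoprops(glcm, "contrast")[0, 0]),    # Eq. (3.8)
        "idm": float(graycoprops(glcm, "homogeneity")[0, 0]),      # Eq. (3.9)
        "energy": np.sum(p ** 3),                                  # Eq. (3.10)
        "variance": np.sum(((i - mu) ** 2) * p),                   # standard GLCM variance, cf. Eq. (3.11)
        "sum_average": np.sum(np.arange(2 * levels - 1) * sum_hist),  # Eq. (3.12)
    }
```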
For the first phase of classification, improved cuckoo search optimization (ICSO) is used to analyze the anomaly characteristics. As a population-based algorithm, it is typically recommended for maximizing the network parameters. The ICSO follows these guidelines: each cuckoo selects a nest arbitrarily and places one egg in it; the nests with the highest-quality eggs carry over to the next generation; the number of host nests is fixed; and a cuckoo egg may be discovered by the host bird with probability $P_a \in [0,1]$. If a cuckoo egg is detected, the host bird may destroy it or abandon the nest. ICSO is the primary method used here to solve the network and image optimization parameter problem. The Lévy distribution is calculated as:
$$\mathrm{Levy}(\alpha) \approx y = l^{-\alpha} \tag{3.13}$$
The Lévy distribution can be simplified using the following equation:
$$\Theta_\alpha\,\mathrm{Levy}(\alpha) \approx X \times \left(\frac{u}{|v|^{1/3}}\right)\left(x_{best} - x_i\right) \tag{3.14}$$
where $X$ is the Lévy multiplication (step-size) coefficient, and $u$ and $v$ are drawn from normal distribution curves:
Algorithm 1. ICSO optimization
Input: image features In_fea, image coordinates Vc
Output: classified value Cv
// Compute the pairwise distances
For i = 1 : size(Vfea, 1)
    For j = 1 : size(Vfea, 1)
        Distance(i, j) = sqrt((in_fea(i,1) − in_fea(i−1,1))² + (in_fea(i,1) − in_fea(j,1))²)
    End
End
in_fea = ⌊in_fea / Distance⌋
// Compute the trust value
itv = in_fea / dist
class_label = unique(target)
k = length(class_label)
For i = 1 : k
    Temp = total_class_mean(i, :)
    W(i, j) = −0.5 × Temp × total_class_mean + log(i)
    W(i, 2:end) = Temp
End
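To make the optimization step concrete, the sketch below implements a generic Lévy-flight cuckoo search in Python, using Mantegna's method for the Lévy step (cf. Eqs. (3.13)–(3.14)). It is a textbook baseline rather than the authors' exact improved variant, and the population size, discovery probability, and step scale are illustrative assumptions.

```python
import math
import numpy as np

def levy_step(dim, beta=1.5, rng=None):
    """Mantegna-style Lévy step u / |v|^(1/beta), cf. Eq. (3.14)."""
    rng = rng or np.random.default_rng()
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(fitness, dim, n_nests=15, pa=0.25, alpha=0.01, iters=200, seed=0):
    """Generic Lévy-flight cuckoo search (minimization)."""
    rng = np.random.default_rng(seed)
    nests = rng.uniform(0.0, 1.0, (n_nests, dim))
    scores = np.array([fitness(x) for x in nests])
    best = nests[np.argmin(scores)].copy()
    for _ in range(iters):
        for k in range(n_nests):
            # Lévy flight biased toward the current best nest, as in Eq. (3.14).
            candidate = nests[k] + alpha * levy_step(dim, rng=rng) * (best - nests[k])
            f = fitness(candidate)
            if f < scores[k]:
                nests[k], scores[k] = candidate, f
        # With probability pa a host bird discovers the egg and the nest is rebuilt.
        discovered = rng.random(n_nests) < pa
        nests[discovered] = rng.uniform(0.0, 1.0, (int(discovered.sum()), dim))
        scores[discovered] = [fitness(x) for x in nests[discovered]]
        best = nests[np.argmin(scores)].copy()
    return best, scores.min()
```

In the proposed framework, `fitness` would measure the classification error of the hybrid neuro-fuzzy classifier for a given parameter vector.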
Using LDA, a score value is calculated for each image after the CT brain images are classified. The LDA is supplied with the output of the applied classifier, and this methodology quantifies the quality of the data present in the image. The classification of CT images is critical. Typically, LDA is used as a data-analysis tool to reduce the dimensionality of a large number of interrelated variables while retaining the greatest amount of relevant information. Linear discriminant analysis (LDA) is also one of the image classification techniques; the application of matrix operators acting on images is shown here. The LDA procedure is described below. The first phase of linear discriminant analysis is based on training the LDA function on sample properties. The LDA has $Cl$ classes ($Cl > 3$), and $s_a$ is assumed to be the set of $L_a$ samples of class $W_a$ in the $DS$-dimensional space. For each class, the between-class scatter matrix $L_{bc}$ and the within-class scatter matrix $L_{wi\_c}$ are defined as follows:
$$L_{wi\_c} = \sum_{a=1}^{CL} L_a, \qquad S_a = \frac{1}{p_a}\sum_{p\in p_a}(p - Q_a)(p - Q_a)^T \tag{3.15}$$
$$L_{bc} = \sum_{a=1}^{CL}(Q_a - Q)(Q_a - Q)^T \tag{3.16}$$
$A$ is a $d \times d$ matrix used for dimensionality reduction to produce $d$-dimensional features $C = A^T x$. The covariance and mean matrix over all samples is given by:
$$S = \frac{1}{n}\sum_{p\in P}(p - m)(p - m)^T \tag{3.17}$$
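As one plausible realization of this scoring step (the paper does not give an explicit formula), the sketch below fits scikit-learn's LDA on the extracted feature vectors and uses the resulting projection as the score value (SV) of each image.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def lda_score_values(features, labels, query_features):
    """Project feature vectors with LDA (maximizing between/within-class scatter,
    cf. Eqs. (3.15)-(3.17)) and return score values for database and query images."""
    lda = LinearDiscriminantAnalysis()
    database_sv = lda.fit_transform(np.asarray(features), np.asarray(labels))
    query_sv = lda.transform(np.atleast_2d(query_features))
    return database_sv, query_sv[0]
```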
Linear discriminant analysis helps maximize class separation along the component axes. The eigenvalues and eigenvectors of the covariance matrix are computed, the smaller eigenvectors are discarded, and the score $S$ is obtained. The similarity is estimated by comparing the score value (SV) of the query image with that of each retrieved image using the Euclidean distance (ED), computed in this study between the query image and each corresponding database image:
$$e_{DIST} = \sqrt{e_{Dr}} + \sqrt{e_{Dg}} + \sqrt{e_{Db}} \tag{3.18}$$
where
$$e_{Dr} = \big(k(x+i, y+j, 1) - f(x, y, 1)\big)^2$$
$$e_{Dg} = \big(k(x+i, y+j, 2) - f(x, y, 2)\big)^2$$
$$e_{Db} = \big(k(x+i, y+j, 3) - f(x, y, 3)\big)^2$$
If the difference between a labeled pixel and an unidentified pixel is less than the threshold, the pixels are identified as belonging to the same area:
$$obj_{ED} = -20 \times q\!\left(\frac{-2\sqrt{\sum SV}}{2}\right) - \exp\!\left(\frac{\sum \cos(2\pi \times 2V)}{d_b}\right) + 20\exp \tag{3.19}$$
where $ED$ denotes the Euclidean distance, $q$ denotes the query image, and $SV$ is the score value from the previous step. The last step of the proposed system is image recovery:
$$IR = \det[P] - k\,\big(\mathrm{recovery}(M)\big)^2 \tag{3.20}$$
where IR is the image to be retrieved, P is the pointed image, and M is the classified image. These variables are declared as:
$$\det[P] = M \tag{3.21}$$
$$\mathrm{Recovery}(P) = M \tag{3.22}$$
The image retrieval process is concluded as:
$$IR = M - V(M)^2 \tag{3.23}$$
where V is the empirical constant.
These equations allow a brain CT image to be retrieved from a large database. In this step, a threshold value $T$ is applied: images whose score value lies above this threshold are retrieved, and images with a lower score are not considered.
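The following sketch ties the pieces together: score values are thresholded by T, and the surviving images are ranked by Euclidean distance to the query (cf. Eqs. (3.18)–(3.23)). The top-4 cut-off mirrors the retrieval examples in Figures 3–5; the threshold itself is dataset dependent, and the use of the first score component for thresholding is an assumption.

```python
import numpy as np

def retrieve(query_sv, database_sv, threshold, top_k=4):
    """Return indices of the top_k database images closest to the query,
    considering only images whose first score component exceeds threshold T."""
    database_sv = np.atleast_2d(np.asarray(database_sv, dtype=float))
    query_sv = np.asarray(query_sv, dtype=float).ravel()
    candidates = np.flatnonzero(database_sv[:, 0] >= threshold)   # discard low-score images
    if candidates.size == 0:
        return np.array([], dtype=int)
    dists = np.linalg.norm(database_sv[candidates] - query_sv, axis=1)
    return candidates[np.argsort(dists)[:top_k]]                  # nearest images first
```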
Figure 2 shows the input BraTS CT brain image dataset.
Figure 3 shows a meningioma query and its top 4 retrieved results.
Figure 4 shows a glioma query and its top 4 retrieved results.
Figure 5 shows a pituitary tumor query and its top 4 retrieved results.
To perform the experiments in this study, MATLAB is used. This section describes the performance of the proposed technique.
Several parameters were estimated and analyzed to evaluate the results. This investigation uses different performance measures to assess the proposed system against the methods of existing studies. The following terms are used in this analysis: 1) true positive (TP), 2) true negative (TN), 3) false positive (FP), and 4) false negative (FN). TP describes normal images that are identified correctly, FP describes images that have been incorrectly identified, TN describes abnormal images that have been properly classified, and FN describes images that are mistakenly labeled anomalous.
Sensitivity is a statistical performance indicator, also regarded as the true positive rate in classification. Sensitivity measures the percentage of normal images that are correctly identified; the corresponding measure based on TN tracks how many irregular images are identified correctly:
$$\text{Sensitivity} = \frac{TP}{TP + FN} \tag{4.1}$$
Specificity, which is also termed the true negative rate, measures the number of real negatives that are correctly identified:
$$\text{Specificity} = \frac{TN}{TN + FP} \tag{4.2}$$
Precision is an output parameter that gives the proportion of retrieved images that are relevant, i.e., the number of correctly retrieved images divided by the total number of retrieved images:
$$\text{Precision} = \frac{TP}{TP + FP} \tag{4.3}$$
Accuracy is defined as the proportion of correctly classified images among all available images; accuracy and recall are reported in percentage terms:
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{4.4}$$
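These four measures reduce to a few lines of code; the confusion counts in the example below are hypothetical and used only to show the calculation, not values from the reported experiments.

```python
def retrieval_metrics(tp, tn, fp, fn):
    """Sensitivity, specificity, precision and accuracy from Eqs. (4.1)-(4.4)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

# Hypothetical confusion counts, used only to illustrate the formulas.
print(retrieval_metrics(tp=98, tn=99, fp=1, fn=2))
```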
The specificity values produced by the proposed method are compared with those of existing techniques in Figure 6. The results in the diagram show that, compared with existing strategies, the proposed strategy achieves a higher specificity rate (98%).
Figure 7 shows that the proposed framework achieves a higher sensitivity rate (98%) than the existing frameworks.
Figure 8 shows the proposed classification technique reaching a maximum accuracy of 98%, which is superior to conventional strategies.
Figure 9 shows the results of the proposed technique, indicating that the proposed strategy achieves a higher precision (98%).
Figure 10 shows the precision (left) and the accuracy (right) image acquisition outcomes obtained using the process suggested and the state-of-the-art CK-1 compression system.
This study develops and analyzes the performance of a precise image retrieval scheme that uses a hybrid adaptive neuro-fuzzy optimized classifier system. The use of this filter is described experimentally. Twelve characteristics are extracted during feature extraction, and the adaptive neuro-fuzzy optimized classifier prototype is used to perform the ranking. The method is designed to maximize performance over the set of queries. The optimization algorithm exploits the intelligent behavior of the cuckoo, offering a population-based search in which the nest locations are altered over time as the cuckoo seeks the best nests. This optimized classifier produces beneficial results in terms of specificity, sensitivity, precision, and recall, and the use of LDA to calculate the scores also yields better results. All of these results demonstrate the highly efficient retrieval of CT images by the proposed system. In future work, this research will be extended to a practical therapeutic setting, and the derived properties will be more finely tuned.
The authors declare there is no conflict of interest.
[1] A. Baazaoui, W. Barhoumi, A. Ahmed, E. Zagrouba, Modeling clinician medical-knowledge in terms of med-level features for semantic content-based mammogram retrieval, Expert Syst. Appl., 94 (2018), 11–20. https://doi.org/10.1016/j.eswa.2017.10.034
[2] B. J. Campana, E. J. Keogh, A compression-based distance measure for texture, Stat. Anal. Data Min., 3 (2010), 381–398. https://doi.org/10.1002/sam.10093
[3] S. R. Dubey, S. K. Singh, S. K. Singh, Local bit-plane decoded pattern: a novel feature descriptor for biomedical image retrieval, IEEE J. Biomed. Health. Inf., 20 (2015), 1139–1147. https://doi.org/10.1109/JBHI.2015.2437396
[4] Y. Kumar, A. Aggarwal, S. Tiwari, K. Singh, An efficient and robust approach for biomedical image retrieval using Zernike moments, Biomed. Signal Process. Control, 39 (2018), 459–473. https://doi.org/10.1016/j.bspc.2017.08.018
[5] M. Lavanya, P. M. Kannan, Lung lesion detection in CT scan images using the fuzzy local information cluster means (FLICM) automatic segmentation algorithm and back propagation network classification, Asian Pac. J. Cancer Prev., 18 (2017), 3395–3399. https://doi.org/10.22034/APJCP.2017.18.12.3395
[6] M. Lazaridis, A. Axenopoulos, D. Rafailidis, P. Daras, Multimedia search and retrieval using multimodal annotation propagation and indexing techniques, Signal Process. Image Commun., 28 (2013), 351–367. https://doi.org/10.1016/j.image.2012.04.001
[7] R. Manickavasagam, S. Selvan, Automatic detection and classification of lung nodules in CT image using optimized neuro fuzzy classifier with cuckoo search algorithm, J. Med. Syst., 43 (2019), 1–9. https://doi.org/10.1007/s10916-019-1177-9
[8] S. Murala, Q. J. Wu, Local ternary co-occurrence patterns: a new feature descriptor for MRI and CT image retrieval, Neurocomputing, 119 (2013), 399–412. https://doi.org/10.1016/j.neucom.2013.03.018
[9] S. Murala, Q. J. Wu, Spherical symmetric 3D local ternary patterns for natural, texture and biomedical image indexing and retrieval, Neurocomputing, 149 (2015), 1502–1514. https://doi.org/10.1016/j.neucom.2014.08.042
[10] S. H. Peng, D. H. Kim, S. L. Lee, M. K. Lim, Texture feature extraction based on a uniformity estimation method for local brightness and structure in chest CT images, Comput. Biol. Med., 40 (2010), 931–942. https://doi.org/10.1016/j.compbiomed.2010.10.005
[11] M. M. Rahman, S. K. Antani, G. R. Thoma, A learning-based similarity fusion and filtering approach for biomedical image retrieval using SVM classification and relevance feedback, IEEE Trans. Inf. Technol. Biomed., 15 (2011), 640–646. https://doi.org/10.1109/TITB.2011.2151258
[12] J. Song, Y. Guo, L. Gao, X. Li, A. Hanjalic, H. T. Shen, From deterministic to generative: multimodal stochastic RNNs for video captioning, IEEE Trans. Neural Networks Learn. Syst., 30 (2018), 3047–3058. https://doi.org/10.1109/TNNLS.2018.2851077
[13] J. Song, H. Zhang, X. Li, L. Gao, M. Wang, R. Hong, Self-supervised video hashing with hierarchical binary auto-encoder, IEEE Trans. Image Process., 27 (2018), 3210–3221. https://doi.org/10.1109/TIP.2018.2814344
[14] D. Unay, A. Ekin, R. S. Jasinschi, Local structure-based region-of-interest retrieval in brain MR images, IEEE Trans. Inf. Technol. Biomed., 14 (2010), 897–903. https://doi.org/10.1109/TITB.2009.2038152
[15] S. K. Vipparthi, S. Murala, A. B. Gonde, Q. J. Wu, Local directional mask maximum edge patterns for image retrieval and face recognition, IET Comput. Vision, 10 (2016), 182–192. https://doi.org/10.1049/iet-cvi.2015.0035
[16] R. Janarthanan, A. Chakraborty, A. Konar, A. K. Nagar, Ad hoc reasoning in chained fuzzy systems realized with Diens-Rescher implication, in 2013 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 1 (2013), 1–6. https://doi.org/10.1109/FUZZ-IEEE.2013.6622561
[17] R. Janarthanan, A. Konar, A. Chakraborty, Propositional syntax and semantics induced knowledge re-structuring in a fuzzy logic network for ad hoc reasoning, Int. J. Approximate Reasoning, 82 (2017), 138–160. https://doi.org/10.1016/j.ijar.2016.12.009
[18] R. Janarthanan, S. Doss, R. Balamurali, Robotic-based nonlinear device fault detection with sensor fault and limited capacity for communication, J. Ambient Intell. Hum. Comput., 11 (2020), 6373–6385. https://doi.org/10.1007/s12652-020-01946-8
[19] R. Janarthanan, S. Doss, S. Baskar, Optimized unsupervised deep learning assisted reconstructed coder in the on-nodule wearable sensor for human activity recognition, Neurocomputing, 164 (2020), 1–10. https://doi.org/10.1016/j.measurement.2020.108050
[20] C. A. Hussain, D. V. Rao, S. A. Mastani, RetrieveNet: a novel deep network for medical image retrieval, Evol. Intell., 14 (2020), 1449–1458. https://doi.org/10.1007/s12065-020-00401-z
[21] R. Hatibaruah, V. K. Nath, D. Hazarika, Local bit plane adjacent neighborhood dissimilarity pattern for medical CT image retrieval, Procedia Comput. Sci., 165 (2019), 83–89. https://doi.org/10.1016/j.procs.2020.01.073
[22] Y. D. Mistry, Textural and color descriptor fusion for efficient content-based image retrieval algorithm, Iran J. Comput. Sci., 3 (2020), 169–183. https://doi.org/10.1007/s42044-020-00056-0
[23] G. S. Kumar, P. K. Mohan, Local mean differential excitation pattern for content based image retrieval, SN Appl. Sci., 1 (2019), 1–10. https://doi.org/10.1007/S42452-018-0047-2
[24] G. Raghuraman, J. P. Ananth, K. L. Shunmuganathan, L. Sairamesh, Local structure-based region-of-interest retrieval in brain MR images, J. Comput. Theor. Nanosci., 12 (2015), 5562–5565. https://doi.org/10.1166/jctn.2015.4684
[25] S. Sabena, P. Yogesh, L. SaiRamesh, Image retrieval using canopy and improved K mean clustering, in International Conference on Emerging Technology Trends, 1 (2011), 15–19.
[26] C. A. Vinodhini, S. Sabena, L. S. Ramesh, A robust and fast fundus image enhancement by dehazing, in International Conference on Computational Vision and Bio Inspired Computing, 1 (2018), 1111–1119. https://doi.org/10.1007/978-3-030-41862-5_113
[27] S. Gupta, P. P. Roy, D. P. Dogra, B. Kim, Retrieval of colour and texture images using local directional peak valley binary pattern, Pattern Anal. Appl., 23 (2020), 1569–1585. https://doi.org/10.1007/s10044-020-00879-4
[28] A. Manickam, R. Soundrapandiyan, S. C. Satapathy, R. D. J. Samuel, S. Krishnamoorthy, U. Kiruthika, et al., Local directional extrema number pattern: a new feature descriptor for computed tomography image retrieval, Arabian J. Sci. Eng., 1 (2021), 1–23. https://doi.org/10.1007/s13369-021-06024-5
[29] S. Basu, M. Karuppiah, M. Nasipuri, A. Halder, N. Radhakrishnan, Bio-inspired cryptosystem with DNA cryptography and neural networks, J. Syst. Archit., 94 (2019), 24–31. https://doi.org/10.1016/j.sysarc.2019.02.005
[30] R. Selvanambi, J. Natarajan, M. Karuppiah, S. H. Islam, M. Hassan, G. Fortino, Lung cancer prediction using higher-order recurrent neural network based on glowworm swarm optimization, Neural Comput. Appl., 32 (2020), 4373–4386. https://doi.org/10.1007/s00521-018-3824-3
[31] S. Basu, M. Karuppiah, K. Selvakumar, K. C. Li, S. H. Islam, M. M. Hassan, et al., An intelligent/cognitive model of task scheduling for IoT applications in cloud computing environment, Future Gener. Comput. Syst., 88 (2018), 254–261. https://doi.org/10.1016/j.future.2018.05.056
[32] R. Elakkiya, P. Vijayakumar, M. Karuppiah, COVID_SCREENET: COVID-19 screening in chest radiography images using deep transfer stacking, Inf. Syst. Front., 23 (2021), 1369–1383. https://doi.org/10.1007/s10796-021-10123-x
[33] F. Wu, X. Li, L. Xu, S. Kumari, M. Karuppiah, J. Shen, A lightweight and privacy-preserving mutual authentication scheme for wearable devices assisted by cloud server, Comput. Electr. Eng., 63 (2017), 168–181. https://doi.org/10.1016/j.compeleceng.2017.04.012
[34] S. Kumari, M. Karuppiah, A. K. Das, X. Li, F. Wu, N. Kumar, A secure authentication scheme based on elliptic curve cryptography for IoT and cloud servers, J. Supercomput., 74 (2018), 6428–6453. https://doi.org/10.1007/s11227-017-2048-0
[35] S. Basu, M. Karuppiah, S. Rajkumar, R. Niranchana, Modification of AES using genetic algorithms for high-definition image encryption, Int. J. Intell. Syst. Technol. Appl., 17 (2018), 452–466. https://doi.org/10.1504/IJISTA.2018.095106
[36] A. R. Sanjay, R. Soundrapandiyan, M. Karuppiah, R. Ganapathy, CT and MRI image fusion based on discrete wavelet transform and Type-2 fuzzy logic, Int. J. Intell. Eng. Syst., 10 (2017), 355–362. https://doi.org/10.22266/ijies2017.0630.40