
Cancer is a manifestation of disorders caused by changes in the body's cells that go far beyond healthy development and stabilization. Breast cancer is a common disease. According to statistics from the World Health Organization (WHO), 7.8 million women are diagnosed with breast cancer. Breast cancer is the name of the malignant tumor normally developed by cells in the breast. Machine learning (ML) approaches, on the other hand, provide a variety of probabilistic and statistical ways for intelligent systems to learn from prior experience to recognize patterns in a dataset that can be used for future decision making. This endeavor aims to build a deep learning-based model for the prediction of breast cancer with better accuracy. A novel deep extreme gradient descent optimization (DEGDO) technique has been developed for breast cancer detection. The proposed model consists of two stages, training and validation. The training phase, in turn, consists of three major layers: the data acquisition layer, the preprocessing layer, and the application layer. The data acquisition layer takes the data and passes it to the preprocessing layer. In the preprocessing layer, noise and missing values are handled and the data are normalized, after which they are fed to the application layer. In the application layer, the model is trained with the deep extreme gradient descent optimization technique. The trained model is stored on the server. In the validation phase, it is imported to process the actual data to be diagnosed. This study used the Wisconsin Breast Cancer Diagnostic dataset to train and test the model. The results obtained by the proposed model outperform many other approaches, attaining 98.73% accuracy, 99.60% specificity, 99.43% sensitivity, and 99.48% precision.
Citation: Muhammad Bilal Shoaib Khan, Atta-ur-Rahman, Muhammad Saqib Nawaz, Rashad Ahmed, Muhammad Adnan Khan, Amir Mosavi. Intelligent breast cancer diagnostic system empowered by deep extreme gradient descent optimization[J]. Mathematical Biosciences and Engineering, 2022, 19(8): 7978-8002. doi: 10.3934/mbe.2022373
Reversible image watermarking technology requires that the watermark information be embedded into the original image on the premise of ensuring the image's visual quality; meanwhile, the original image can be restored losslessly after the watermark is extracted. Compared with digital image watermarking [1], reversible image watermarking places higher requirements on watermark embedding, which also makes it more widely studied and applied in military, medical, and other fields demanding high image authenticity and integrity. One of the fundamental goals of watermarking is to increase the embedding rate while limiting the loss of the image's visual quality.
Tian [2] proposed a reversible watermarking technology with a high embedding rate, which used difference expansion (DE) to embed the watermark in adjacent pixel pairs. This algorithm has attracted great attention, and a number of improved embedding algorithms have subsequently been developed based on the DE.
However, the DE method can generate overflow, which degrades the watermark embedding quality. Therefore, eliminating the overflow location map is of great value for enhancing the performance of such algorithms. A reversible watermarking algorithm combining the DE and the reversible contrast map was proposed in [3]. In this algorithm, the contrast pixel pair was mainly used to embed a small amount of auxiliary information instead of the location map, and the embedding capacity was greatly enhanced; however, the deterioration in image quality was more serious. Wang et al. [4] proposed a reversible watermarking method based on the difference histogram. It was more distinctive in terms of overflow processing, but it still needed to embed a compressed location map. A reversible image watermarking scheme based on the DE effectively tackled the issues of overflow and underflow encountered during embedding in [5]. A quadratic DE based reversible technique was developed by Zhang et al. [6], where the DE was applied twice to improve the security and embedding capacity while ensuring the quality. No location map was generated, which further allowed hiding a large watermark.
The reversible image watermarking algorithm combining the DE with the least significant bit (LSB) was proposed in [7]. The algorithm realized greater embedding capacity while maintaining better imperceptibility. By combining histogram transform and prediction DE, Lin et al. [8] put forward a reversible watermarking technology, which effectively expanded the embedding capacity and achieved promising results.
Muhammad et al. [9] introduced a reversible image watermarking algorithm based on Genetic Programming (GP). Firstly, the algorithm adjusted the image pixels, then executed an integer wavelet transform (IWT) on the adjusted image and partitioned the sub-band coefficients into non-overlapping blocks. The GP algorithm was applied to achieve the optimal payload for coefficient compression, and subsequently the LSB method was utilized to embed the watermark into the compressed coefficients. The algorithm guaranteed that the watermarked image had better visibility and a higher payload. A reversible image watermarking algorithm based on local prediction was put forward by Vinoth et al. [10]. This algorithm divided the image into non-overlapping blocks. Pixels on the edge of each block were predicted by the Median Edge Detection (MED) predictor, and the other pixels were predicted by local prediction. It offered better visual quality but a lower embedding rate. Jia et al. [11] proposed a reversible data hiding algorithm to lessen invalid pixel shifts in histogram shifting. Based on the image texture, the algorithm reduced the invalid shift of pixels in the histogram shift, and it achieved distinguished embedding capacity and visual quality.
An enhanced reduced-DE-based reversible image watermarking was proposed in [12], aiming to reduce the loss of image quality during embedding. Likewise, Yu et al. [13] proposed a reversible watermarking algorithm based on multidimensional prediction-error expansion, which can reduce the embedding distortion by discarding embedding maps that may produce high distortion. The algorithm was more suitable for images with smooth and simple textures. In addition, a reversible watermarking based on double-layer difference expansion was proposed in [14]. In this algorithm, smooth pixel pairs were preferred for embedding the watermark, and the distortion was well controlled; however, the embedding rate was not high.
Based on the analysis of the traditional DE watermarking, an improved large-capacity reversible image watermarking algorithm based on the DE is proposed in this paper, which can effectively improve the embedding capacity and reduce the image distortion. It provides an effective solution to avoiding pixel overflow.
The reversible image watermarking studied here mainly uses multi-scale decomposition, interpolation expansion, generalized difference expansion (GDE) and overflow processing to realize watermark embedding and extraction.
Compared with the adjacent-pixel expansion algorithm presented by Tian [2], the generalized difference expansion algorithm makes fuller use of the redundant information between adjacent pixels. It processes a plurality of adjacent pixels at once, which can be used to embed more watermark information. This paper uses this method to embed information into the selected original image pixel blocks. Suppose X = (x0, x1, x2, x3, …, xn−1) is a set of pixel values; the direct transform of the generalized integer transform is:
$$\bar{x}=\left\lfloor \sum_{i=0}^{n-1} a_i x_i \Big/ \sum_{i=0}^{n-1} a_i \right\rfloor,\qquad d_1=x_1-x_0,\quad d_2=x_2-x_0,\quad \cdots,\quad d_{n-1}=x_{n-1}-x_0 \tag{1}$$
For a group of pixel differences d1, d2, …, dn−1, Eq (2) can be used on each to hide 1 bit of watermark information b:
$$d'_i=2\times d_i+b \tag{2}$$
where d′i is the pixel difference after embedding the watermark. The watermark embedding process is required not to cause overflow of the image pixel values. The corresponding inverse transform is:
$$x'_0=\bar{x}-\left\lfloor \sum_{i=1}^{n-1} a_i d'_i \Big/ \sum_{i=0}^{n-1} a_i \right\rfloor,\qquad x'_1=x'_0+d'_1,\quad x'_2=x'_0+d'_2,\quad \cdots,\quad x'_{n-1}=x'_0+d'_{n-1} \tag{3}$$
A set of pixel values X′ = (x′0, x′1, x′2, x′3, …, x′n−1) is generated by the generalized difference expansion algorithm, in which the mean of the group pixels is:
$$\bar{x}'=\left\lfloor \sum_{i=0}^{n-1} a_i x'_i \Big/ \sum_{i=0}^{n-1} a_i \right\rfloor \tag{4}$$
After derivation and calculation, $\bar{x}'-\bar{x}=0$; evidently, a group of pixels has the same group mean value after the generalized difference expansion transform. After using the generalized difference expansion algorithm to embed information into a pixel block, the mean value of the block's pixels must remain unchanged, which is a fairly strict requirement.
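As a concrete sketch, Eqs (1)–(3) can be implemented for one pixel block as follows, assuming equal weights a_i = 1; the function names `gde_embed` and `gde_extract` are illustrative, not from the paper:

```python
# Sketch of the generalized difference expansion (Eqs (1)-(3)) for one
# pixel block, with all weights a_i assumed equal to 1.

def gde_embed(pixels, bits):
    """Embed len(pixels)-1 watermark bits into a block of pixel values."""
    n = len(pixels)
    assert len(bits) == n - 1
    mean = sum(pixels) // n                       # Eq (1) with a_i = 1
    d = [pixels[i] - pixels[0] for i in range(1, n)]
    d2 = [2 * di + b for di, b in zip(d, bits)]   # Eq (2)
    x0 = mean - sum(d2) // n                      # Eq (3) with a_i = 1
    return [x0] + [x0 + di for di in d2]

def gde_extract(marked):
    """Recover the original block and the embedded bits."""
    n = len(marked)
    mean = sum(marked) // n                       # group mean is preserved
    d2 = [marked[i] - marked[0] for i in range(1, n)]
    bits = [di % 2 for di in d2]                  # LSB of each difference
    d = [di // 2 for di in d2]
    x0 = mean - sum(d) // n
    return [x0] + [x0 + di for di in d], bits
```

For example, embedding bits (1, 0, 1) into (10, 12, 15, 13) yields (7, 12, 17, 14); both blocks have the same floored mean of 12, illustrating the x̄′ = x̄ property, and extraction restores the original block exactly.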
An image F is divided into multiple scales from whole to parts, thus obtaining sub-image blocks Fi,j at different scales, where i = 1, 2, …, d denotes the segmentation scale, and j = 1, …, 4^(i−1) denotes the number of sub-blocks at each scale. Due to the limited size of the image, the segmentation scale is also limited.
The segmentation method is displayed in Figure 1, showing segmentation from the 1st to the 3rd scale from left to right. The whole image is the sub-block F1,1 obtained at the 1st scale; it is then divided into four sub-blocks F2,1, F2,2, F2,3 and F2,4, each called a second-scale sub-block, and these are divided in turn until the maximum set scale is reached.
As the traditional multi-scale decomposition is also a fixed-size block decomposition method, this paper improves upon the multi-scale decomposition.
The improved image multi-scale decomposition divides a rectangular image into 4 equal-sized square blocks and then determines whether these four square blocks meet the homogeneity criterion. If a block satisfies the criterion, it remains unchanged; otherwise it is further decomposed into four square blocks, which are tested in turn, until all blocks meet the given criterion. The decomposition criterion can be expressed as:
$$|p_i-p_{ave}|>(gl-1)\times\gamma \tag{5}$$
In Formula (5), pi and pave are the gray value of any pixel and the average gray value of all pixels in a square block, respectively; gl is the number of gray levels of a pixel; γ is a decimal in the range [0, 1]. The criterion is: when the absolute value of the difference between the gray value of any pixel and the average gray value of all pixels in the square block is greater than (gl−1)×γ, the block needs to be further divided, as shown in Figures 2 and 3.
This block division method divides images into blocks of variable size. From the perspective of the division results, the decomposed image blocks contain pixels with high homogeneity, which are suitable for lossless watermark embedding. As specified in the algorithm, the minimum block size is 4×4.
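The recursive split described above can be sketched as a quadtree. The code below is illustrative (names, the `gl = 256` gray-level count, and `gamma = 0.1` are assumptions for the example), with the 4 × 4 minimum block size from the text; the image is a plain list of rows:

```python
# Improved multi-scale (quadtree) decomposition sketch: a square block is
# split into four equal sub-blocks whenever any pixel deviates from the
# block average by more than (gl - 1) * gamma, i.e. criterion Eq (5).

def decompose(img, r, c, size, gl=256, gamma=0.1, min_size=4):
    """Return a list of (row, col, size) homogeneous blocks."""
    vals = [img[i][j] for i in range(r, r + size) for j in range(c, c + size)]
    avg = sum(vals) / len(vals)
    homogeneous = all(abs(v - avg) <= (gl - 1) * gamma for v in vals)
    if homogeneous or size <= min_size:
        return [(r, c, size)]            # keep the block as-is
    half = size // 2
    blocks = []
    for dr in (0, half):                 # recurse into the four quadrants
        for dc in (0, half):
            blocks += decompose(img, r + dr, c + dc, half, gl, gamma, min_size)
    return blocks
```

On a uniform 8 × 8 image the whole image is kept as one block; putting one abrupt bright pixel in a corner forces a split into four 4 × 4 sub-blocks, which then stop at the minimum size.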
After multi-scale decomposition, the size of each image block is 2^(2+n) × 2^(2+n), n ∈ {0, 1, 2, …, 7}. The size information of each block can be converted into binary form. Each of the decomposed sub-blocks is coded by the sub-block size, as shown in Table 1.
Size / pixels | code | Size / pixels | code |
4×4 | 000 | 64×64 | 100 |
8×8 | 001 | 128×128 | 101 |
16×16 | 010 | 256×256 | 110 |
32×32 | 011 | 512×512 | 111 |
After multi-scale decomposition, the image sub-blocks obtained are sorted (from top to bottom, from left to right), and the scale information of each sub-block is recorded sequentially as per the ordering result, thereby constituting the decomposition information q of the original image. Since only a small number of large-sized blocks result from the decomposition, Huffman encoding [15] can be used to further reduce the length of the image decomposition information, denoted as Huf (q).
For the security of the algorithm, the length of the parameter Huf (q) and the code table are sent to the recipient in the form of a secret key.
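A minimal Huffman coding sketch (standard-library `heapq`) illustrates compressing the sequence of block-size codes q; the example frequencies and the helper name `huffman_table` are assumptions for illustration, not from the paper:

```python
# Build a Huffman prefix-code table for the block-size codes of Table 1
# and encode a sample decomposition sequence q (mostly small blocks).
import heapq
from collections import Counter

def huffman_table(symbols):
    """Return a prefix-code table {symbol: bitstring} from frequencies."""
    heap = [(freq, i, {sym: ""})
            for i, (sym, freq) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    if len(heap) == 1:                           # degenerate one-symbol case
        return {next(iter(heap[0][2])): "0"}
    tie = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)          # two least frequent trees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

q = ["000"] * 9 + ["001"] * 4 + ["010"] * 2 + ["100"]  # 16 size codes
table = huffman_table(q)
encoded = "".join(table[s] for s in q)           # Huf(q)
```

Here the 16 fixed 3-bit codes (48 bits) compress to 26 bits, since the frequent small-block code gets the shortest codeword.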
The GDE algorithm may produce an overflow after embedding watermark. The overflow processing will have a significant impact on the watermark embedding capacity. In this paper, the overflow points generated by embedding the watermark into the original image are labeled in a location map having the same image size. The generated location map is a binary image marked by 0 or 1 for each pixel. At the same time, for the overflow point, the original pixel value is replaced by the generated value by using the GDE. The difference between the generated value and the maximum pixel value of the image is taken into consideration by the inverse DE to calculate the new pixel value.
A 2 × 2 image block contains 4 pixel values (a, b, c, d). After embedding the watermark through the GDE, it becomes 4 new pixel values (A, B, C, D). If the generated second pixel overflows, it is adjusted to bring its gray value back within the range 0–255 so that the watermark embedding can be completed. The specific adjustment is carried out as follows:
If the generated pixel value B is a positive overflow, the transformed pixel value is 255 − |B − 255|.
If the generated pixel value B is a negative overflow, the transformed pixel value is 0 + |B − 0|.
For instance, if the pixel value generated by the DE is 266, the new image pixel value will be 244. The transform is detailed as follows: 266 − 255 = 11, and 255 − 11 = 244.
The original image is divided into sub-blocks with a size of 2 × 2 pixels. As shown in Figure 4, if the pixel values in one of the sub-blocks are (244,254,250,248), the amount of the watermark embedded by the GDE is 3 bits. Suppose that the embedded watermark is (1, 1, 1), then, the pixel values of the generated image sub-block are (239,260,252,248) by using a generalized difference transform processing. As a result, the value of the second pixel in the image sub-block will overflow and the overflow processing is needed to better embed the watermark.
By performing the inverse transform on the overflowed pixel, the pixel values of the image sub-block become (239,250,252,248). After embedding the watermark information (1, 1, 1) into this transformed sub-block with the generalized difference transform again, the pixel values of the new sub-block are (230,253,257,249). The sub-block overflows again under the GDE, and performing the inverse transform yields the final pixel values (230,253,253,249).
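The worked example above can be traced step by step in code. This sketch uses an equal-weight GDE and illustrative helper names, and reproduces the intermediate blocks quoted in the text:

```python
# Trace of the overflow-processing example: embed (1,1,1) into
# (244,254,250,248), reflect the overflowed pixel back into range,
# and embed again.

def gde_embed(pixels, bits):
    """Equal-weight GDE embedding (Eqs (1)-(3))."""
    n = len(pixels)
    mean = sum(pixels) // n
    d2 = [2 * (pixels[i] - pixels[0]) + b
          for i, b in zip(range(1, n), bits)]
    x0 = mean - sum(d2) // n
    return [x0] + [x0 + di for di in d2]

def fix_overflow(pixels):
    """Reflect out-of-range values back into [0, 255]."""
    out = []
    for p in pixels:
        if p > 255:
            p = 255 - (p - 255)          # positive overflow
        elif p < 0:
            p = 0 + abs(p)               # negative overflow
        out.append(p)
    return out

block = [244, 254, 250, 248]
w = [1, 1, 1]
first = gde_embed(block, w)      # [239, 260, 252, 248]: 260 overflows
adjusted = fix_overflow(first)   # [239, 250, 252, 248]
second = gde_embed(adjusted, w)  # [230, 253, 257, 249]: 257 overflows
final = fix_overflow(second)     # [230, 253, 253, 249]
```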
After overflow processing, the watermark can be embedded into the adjusted overflow pixels by using the GDE again. The overflow points generated by the last DE are recorded in the overflow location map. While keeping a certain visual quality, this overflow processing can be carried out multiple times during watermark embedding.
As mentioned in Section 1, the shortcomings of previous schemes in terms of embedding capacity and visual quality of the watermarked images are identified. At the same time, the existing algorithms do not handle overflow very well. To solve the problems of poor imperceptibility and low embedding rate in the existing algorithms, a novel large-capacity reversible image watermarking based on the DE is proposed. The proposed scheme effectively controls the overflow of pixel values when embedding the watermark with the DE, and uses multi-scale decomposition to remove the abrupt points in each image block. After information is embedded into a sub-block by the GDE, the average value of the pixels in the sub-block remains unchanged, so the watermark information can be extracted effectively. The scheme consists of two procedures: watermark embedding and watermark extraction.
The embedding process of the reversible watermarking algorithm is shown in Figure 5, and the specific operation flow is described as follows:
STEP1: The watermark W' is obtained when the watermark W is scrambled by using Arnold transform [16]. The W' is converted into a one-dimensional binary sequence. To enhance robustness and security, the scrambling method is improved as follows:
$$\begin{pmatrix}x'\\y'\end{pmatrix}=\begin{pmatrix}1&1\\1&2\end{pmatrix}^{c}\begin{pmatrix}2&1\\1&1\end{pmatrix}^{d}\begin{pmatrix}x\\y\end{pmatrix}\bmod M,\qquad x',y'\in\{0,1,2,\cdots,M-1\} \tag{6}$$
where (x, y) is the coordinate of the original image pixel, (x′, y′) is the transformed coordinate, M refers to the size of the image matrix, and c and d are randomly generated scrambling numbers.
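The scrambling of Eq (6) can be sketched as follows (illustrative names; the map is built for an M × M coordinate grid). Because both matrices have determinant 1, the transform is a bijection modulo M, so it can be inverted exactly to recover the watermark:

```python
# Sketch of the improved Arnold-style scrambling of Eq (6): apply
# [[1,1],[1,2]]^c * [[2,1],[1,1]]^d to every coordinate, mod M.

def mat_mul(A, B, M):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % M for j in range(2)]
            for i in range(2)]

def mat_pow(A, e, M):
    R = [[1, 0], [0, 1]]                 # identity
    for _ in range(e):
        R = mat_mul(R, A, M)
    return R

def scramble_map(M, c, d):
    """Return {(x, y): (x', y')} implementing Eq (6)."""
    A = mat_pow([[1, 1], [1, 2]], c, M)
    B = mat_pow([[2, 1], [1, 1]], d, M)
    T = mat_mul(A, B, M)
    return {(x, y): ((T[0][0] * x + T[0][1] * y) % M,
                     (T[1][0] * x + T[1][1] * y) % M)
            for x in range(M) for y in range(M)}
```

For descrambling, reversing the dictionary (or applying the modular inverse matrices) restores every pixel to its original position.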
STEP2: The original image I (M × N, where M and N are both integer multiples of 4) is divided into non-overlapping image sub-blocks Ii, i ∈ [0, (M × N)/(4 × 4) − 1];
STEP3: The smoothness value of each block Ii is calculated [18], then all sub-blocks are sorted in an ascending order with respect to their smoothness values, along with the establishment of a sort index table according to the sorting results. Finally, the blocks in front of the sequence are selected properly to embed the watermark in turn;
STEP4: There may be abrupt points in the selected image sub-block Ii. In this regard, the multi-scale decomposition algorithm is applied to delete the image sub-blocks with abrupt points. The effective watermark information will not be embedded into these blocks. Finally, according to the number of the embedded watermarks, the pixel blocks in front of the sequence (image sub-blocks containing abrupt points have been removed) are selected to embed watermarks properly;
STEP5: Considering the requirement of watermark embedding capacity, the pixel sub-blocks (assuming the first n blocks) in front of the sequence are selected to embed the watermark in turn. Note that, the algorithm removes the selected image sub-blocks with abrupt points (if there are m blocks). Therefore, the number of image blocks used for embedding watermark information is n-m. At the same time, the sorting sequence numbers of the image sub-blocks containing abrupt points are recorded for watermark extraction.
STEP6: The watermark is embedded into any selected sub-block Ai (0 ⩽ i ⩽ n) by using the GDE (for a block of 4 × 4 pixels, a 15-bit binary watermark can be embedded). When using the DE to embed the watermark, pixels beyond the range of the gray values are marked in a binary location map having the same size as the original image: if an overflow occurs, a value of 1 is marked at the corresponding location, and if no overflow occurs, a value of 0 is marked, thereby generating the overflow location map.
STEP7: For identifying the corresponding pixel points of overflows, a transform algorithm (Section 2) is adopted to avoid the pixel overflow. The pixel value transform is carried out for the identified pixels, so that the watermark can be embedded with the GDE again. The overflowed pixel value is beyond the range of the image gray values, and the absolute difference is calculated to make the pixel value within a reasonable range.
If the pixel value a generated by the DE is a positive overflow, the transformed pixel value is 255 − |a − 255|.
If the pixel value b generated by the DE is a negative overflow, the transformed pixel value is 0 + |b − 0|.
The auxiliary information (the compressed overflow location map, the embedding capacity, the scrambling numbers c and d, etc.) is hidden in the blocks with complex textures.
STEP8: Among the sub-blocks of the original image I not used to embed the watermark, i.e., the original sub-blocks with high texture complexity, the last k sub-blocks are selected to embed the auxiliary information by the difference quantization method, and the selected k is saved for watermark extraction.
STEP9: The auxiliary information is embedded into each pixel in each selected pixel sub-block by difference quantization.
STEP9.1: Calculate the average pixel value of each original image pixel block corresponding to the last k sub-blocks in the sorting sequence as follows:
$$\overline{avg}=\left\lfloor\frac{x_1+x_2+\cdots+x_{m\times n}}{m\times n}\right\rfloor \tag{7}$$
where m and n refer to the numbers of rows and columns of the divided sub-block, respectively, and x1, x2, …, x_{m×n} are the pixels contained in the sub-block.
STEP9.2: The maximum pixel value and the minimum pixel value in each sub-block are obtained, and the auxiliary information is embedded using difference quantization. Each image sub-block can embed two bits of auxiliary information using difference quantization. In each sub-block, one bit of the auxiliary information is embedded by comparing the minimum pixel value and the average value, and then the next bit of the auxiliary information is embedded by comparing the maximum pixel value and the average value.
1) Embed the auxiliary information by comparing the minimum pixel value and the average value.
$$a'=\begin{cases}a-1, & (\lfloor\overline{avg}-a\rfloor\,\%\,2=1,\ w=1)\ \lor\ (\lfloor\overline{avg}-a\rfloor\,\%\,2=0,\ w=0)\\ a, & (\lfloor\overline{avg}-a\rfloor\,\%\,2=1,\ w=0)\ \lor\ (\lfloor\overline{avg}-a\rfloor\,\%\,2=0,\ w=1)\end{cases} \tag{8}$$
where a represents the pixel value to be embedded; ¯avg refers to the average value of the pixels in the sub-block where the embedded pixel is located; w represents the embedded binary watermark bit; and % represents the modulo operation.
When embedding one bit of the auxiliary information by comparing the minimum pixel value and the average value, if either (⌊¯avg−a⌋)%2 = 1 with w = 1, or (⌊¯avg−a⌋)%2 = 0 with w = 0, holds, the new minimum pixel value a′ equals a − 1; if either (⌊¯avg−a⌋)%2 = 1 with w = 0, or (⌊¯avg−a⌋)%2 = 0 with w = 1, holds, the new minimum pixel value a′ equals a.
2) Embed the auxiliary information by comparing the maximum pixel value and the average value.
$$a'=\begin{cases}a+1, & (\lfloor\overline{avg}-a\rfloor\,\%\,2=1,\ w=1)\ \lor\ (\lfloor\overline{avg}-a\rfloor\,\%\,2=0,\ w=0)\\ a, & (\lfloor\overline{avg}-a\rfloor\,\%\,2=1,\ w=0)\ \lor\ (\lfloor\overline{avg}-a\rfloor\,\%\,2=0,\ w=1)\end{cases} \tag{9}$$
When embedding one bit of the auxiliary information by comparing the maximum pixel value and the average value, if either (⌊¯avg−a⌋)%2 = 1 with w = 1, or (⌊¯avg−a⌋)%2 = 0 with w = 0, holds, the new maximum pixel value a′ equals a + 1; if either (⌊¯avg−a⌋)%2 = 1 with w = 0, or (⌊¯avg−a⌋)%2 = 0 with w = 1, holds, the new maximum pixel value a′ equals a.
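The difference quantization of Eqs (8), (9) and (12) can be sketched as below. This assumes a non-negative remainder for % (as in Python) and a block where the ±1 adjustments leave the floored average unchanged, which is what makes blind extraction possible; names are illustrative:

```python
# Difference-quantization sketch: adjust the extreme pixel a so that
# floor(avg - a) % 2 == 0 encodes w = 1 and parity 1 encodes w = 0.

def embed_bit(a, avg, w, is_max):
    """Eqs (8)/(9): keep a if its parity already encodes w, else shift it."""
    parity = (avg - a) % 2
    keep = (parity == 1 and w == 0) or (parity == 0 and w == 1)
    if keep:
        return a
    return a + 1 if is_max else a - 1     # max pixel moves up, min moves down

def extract_bit(a, avg):
    """Eq (12): parity 0 -> bit 1, parity 1 -> bit 0."""
    return 1 if (avg - a) % 2 == 0 else 0

block = [10, 12, 14, 15]
avg = sum(block) // len(block)                           # Eq (7): avg = 12
new_min = embed_bit(min(block), avg, 0, is_max=False)    # 10 -> 9
new_max = embed_bit(max(block), avg, 1, is_max=True)     # 15 -> 16
marked = [new_min, 12, 14, new_max]                      # floored avg still 12
avg2 = sum(marked) // len(marked)
bits = (extract_bit(new_max, avg2), extract_bit(new_min, avg2))  # max first
```

Extracting with the maximum pixel first and then the minimum, as Section STEP3 of the extraction describes, recovers the pair of embedded bits (1, 0).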
STEP10: The watermarked image I″ is obtained after embedding the watermark information with the GDE and the difference quantization technique.
Suppose that the watermark is embedded into any pixel pair < x, y > by the DE. If the embedded watermark is 1, the value of the new pixel pair < a, b > will be:
$$a=\left\lfloor\frac{x+y}{2}\right\rfloor+\left\lfloor\frac{2(x-y)+1+1}{2}\right\rfloor=\left\lfloor\frac{x+y}{2}\right\rfloor+(x-y)+1 \tag{10}$$
$$b=\left\lfloor\frac{x+y}{2}\right\rfloor-\left\lfloor\frac{2(x-y)+1}{2}\right\rfloor=\left\lfloor\frac{x+y}{2}\right\rfloor-(x-y) \tag{11}$$
Accordingly, a − b = 2x − 2y + 1.
Therefore, for every pixel pair, if the embedded watermark bit is 1, the difference between the new pixel pair is odd; if the embedded watermark bit is 0, the difference is even. Accordingly, when restoring the original image, if the difference between a generated pixel pair < a, b > is odd, the extracted watermark bit is 1; otherwise it is 0. Based on this, the watermark can be extracted in order.
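Eqs (10) and (11), together with the parity rule, can be sketched for a single pixel pair; the function names are illustrative:

```python
# Tian-style DE on one pixel pair, matching Eqs (10)/(11): the parity of
# the new difference a - b carries the embedded bit.

def de_embed(x, y, bit):
    l, h = (x + y) // 2, x - y           # integer mean and difference
    h2 = 2 * h + bit                     # expanded difference
    return l + (h2 + 1) // 2, l - h2 // 2          # new pair (a, b)

def de_extract(a, b):
    h2 = a - b
    bit = h2 % 2                         # odd difference -> bit 1
    h = h2 // 2                          # undo the expansion
    l = b + h2 // 2                      # recover the integer mean
    return l + (h + 1) // 2, l - h // 2, bit       # (x, y, bit)
```

For example, embedding bit 1 into the pair (100, 97) gives (102, 95) with odd difference 7, and extraction restores both the original pair and the bit; embedding bit 0 gives an even difference instead.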
The watermark extraction flow is shown in Figure 6. The specific operation flow is detailed as follows:
STEP1: The watermarked image I″ (M × N, where M and N are both integer multiples of 4) is divided into non-overlapping image sub-blocks I″i, i ∈ [0, (M × N)/(4 × 4) − 1];
STEP2: Calculate the smoothness value of each block I″i, then sort all sub-blocks in ascending order with respect to their smoothness values and create the sort index table accordingly. Since the average value of the pixels in each sub-block remains unchanged after information is embedded into the sub-block with the GDE, the smoothness value of an image block stays unchanged before and after the watermark is embedded, so the two sort orders are the same [18].
STEP3: The last k sub-blocks in the sequence are selected to extract the auxiliary information by difference quantization.
The auxiliary information is extracted by the difference quantization method, in the reverse order of embedding. Recall that one bit of the auxiliary information was first embedded using the minimum pixel value, and the next bit using the maximum pixel value. Therefore, to keep the parity relation between the minimum value and the average value, as well as between the maximum value and the average value, before and after embedding, the maximum pixel value is first used to extract one bit of the auxiliary information, and then the minimum pixel value is used to extract the other bit.
$$w=\begin{cases}1, & \lfloor\overline{avg}-a\rfloor\,\%\,2=0\\ 0, & \lfloor\overline{avg}-a\rfloor\,\%\,2=1\end{cases} \tag{12}$$
where a represents the pixel value used to extract the watermark, ¯avg refers to the average value of the pixels in the sub-block where the embedded pixel is located, w represents the embedded binary watermark bit, and % represents the modulo operation.
STEP4: According to the extracted auxiliary information, the inverse GDE is used to extract the watermark from the first n well-ordered blocks (with the image sub-blocks containing abrupt points removed). The watermark information W is then recovered by the inverse Arnold transform.
STEP5: After extracting the watermark information, the recovered image I‴ is combined with the corresponding sub-blocks containing the abrupt points to obtain the image I. The image I is then combined with the image obtained after extracting the auxiliary information to obtain the final image A.
In this experiment, eight 8-bit standard gray images with the size of 512 × 512 pixels, including Lena, Barbara, Baboon, Pepper, Girl, Cameraman, Man and Couple were selected as the host images for watermark embedding (Figure 7) (all images were derived from http://sipi.suc.edu/database). A binary image with the size of 32 × 32 pixels was selected as the watermark image (Figure 8).
The integrity of the watermark embedding algorithm is generally measured by the NC (Normalized Correlation), whose analytical form is given in Eq (13):
$$NC=\frac{\sum_{i=0}^{L-1}\sum_{j=0}^{K-1} I(i,j)\,I'(i,j)}{\sum_{i=0}^{L-1}\sum_{j=0}^{K-1} \left[I(i,j)\right]^2} \tag{13}$$
where, I(i,j) and I′(i,j) respectively denote the pixel values at (i,j) in the original image and the carrier image of recovery after the watermark is extracted. L and K denote the numbers of rows and columns of the image respectively.
Table 2 reports the integrity assessment results for the watermarked images in the absence of attacks. The NC value of every image equals 1; when there are no attacks, an NC value of 1 means the original image is restored completely. This shows that the carrier images restored by this algorithm have high visual quality.
Image (512 × 512 pixels) | Lena | Barbara | Baboon | Peppers | Girl | Cameraman | Man | Couple |
NC | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
PSNR (Peak Signal to Noise Ratio) is currently one of the main indicators used to evaluate the visual quality of the reversible watermarking. The PSNR is formally formulated as shown in Eq (14):
$$PSNR=10\log_{10}\left(\frac{255^2}{\frac{1}{MN}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\left[I'(i,j)-I(i,j)\right]^2}\right) \tag{14}$$
where I(i, j) and I′(i, j) respectively denote the pixel values at position (i, j) in the original image and the watermarked image. Here, M and N represent the numbers of rows and columns of the image, respectively.
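Eq (14) translates directly to code for 8-bit images; returning `math.inf` for a zero MSE (identical images) is a convention assumed here for illustration:

```python
# PSNR of Eq (14), base-10 logarithm, 8-bit peak value 255.
import math

def psnr(I, I2):
    M, N = len(I), len(I[0])
    mse = sum((I2[i][j] - I[i][j]) ** 2
              for i in range(M) for j in range(N)) / (M * N)
    return math.inf if mse == 0 else 10 * math.log10(255 ** 2 / mse)
```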
The SSIM (structural similarity) is utilized to evaluate the quality of a stego image and the mathematical form of the SSIM is stated in Eq (15):
$$SSIM(x,y|w)=\frac{(2\bar{w}_x\bar{w}_y+C_1)(2\sigma_{w_xw_y}+C_2)}{(\bar{w}_x^2+\bar{w}_y^2+C_1)(\sigma_{w_x}^2+\sigma_{w_y}^2+C_2)} \tag{15}$$
where C1 and C2 are small constants, ¯wx is the mean value of the region wx, while ¯wy represents the average value of the region wy; σ²wx is the variance of wx, and σwxwy is the covariance between the two regions. The numeric value of the SSIM lies in the range [0, 1], and a higher SSIM indicates a higher quality of the stego image: in evaluating a reversible information hiding algorithm, the closer the SSIM value of the hidden image is to 1, the better.
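Eq (15) can be computed for a single pair of windows as below. The constants C1 = (0.01·255)² and C2 = (0.03·255)² are the conventional choices and are an assumption here, since the text only says C1 and C2 are small constants:

```python
# Single-window SSIM of Eq (15) between two flattened pixel regions.

def ssim(wx, wy, C1=(0.01 * 255) ** 2, C2=(0.03 * 255) ** 2):
    n = len(wx)
    mx, my = sum(wx) / n, sum(wy) / n                      # region means
    vx = sum((v - mx) ** 2 for v in wx) / n                # variances
    vy = sum((v - my) ** 2 for v in wy) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(wx, wy)) / n
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```

Identical windows give an SSIM of exactly 1, and small perturbations pull the value slightly below 1, matching the interpretation above.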
Table 3 shows the PSNR and SSIM values obtained by the algorithm proposed in this paper and by those proposed in [17] and [10], after dividing the images into sub-blocks of 4 × 4 pixels.
Table 3. PSNR (dB) and SSIM of the proposed algorithm and the algorithms in [17] and [10].

| Image name | Proposed PSNR | Proposed SSIM | [17] PSNR | [17] SSIM | [10] PSNR | [10] SSIM |
|---|---|---|---|---|---|---|
| Lena | 54.63 | 0.989 | 46.72 | 0.982 | 52.18 | 0.984 |
| Baboon | 52.36 | 0.987 | 48.37 | 0.985 | 51.37 | 0.988 |
| Barbara | 53.51 | 0.993 | 47.54 | 0.983 | 51.93 | 0.986 |
| Peppers | 53.14 | 0.991 | 46.81 | 0.981 | 51.73 | 0.987 |
| Girl | 54.27 | 0.989 | 47.13 | 0.986 | 52.43 | 0.987 |
| Cameraman | 53.69 | 0.991 | 46.92 | 0.982 | 51.52 | 0.985 |
| Man | 52.73 | 0.988 | 47.98 | 0.984 | 51.49 | 0.987 |
| Couple | 53.12 | 0.990 | 46.85 | 0.983 | 52.17 | 0.986 |
After the watermark is embedded into the eight images above, the PSNR of the proposed algorithm reaches up to 54.63 dB, higher than that of the algorithms in [17] and [10]; the SSIM is likewise higher. As Table 3 shows, the proposed algorithm outperforms those in [17] and [10], achieving excellent SSIM and PSNR under the same payload capacity, which means it has better visual quality. The details are shown in Table 4.
Table 4. Original images, watermarked images, and original/extracted watermarks (figures omitted here), with the PSNR of each watermarked image.

| Image name | PSNR (dB) |
|---|---|
| Lena | 54.63 |
| Baboon | 52.36 |
| Barbara | 53.51 |
| Peppers | 53.14 |
| Girl | 54.27 |
| Cameraman | 53.69 |
| Man | 52.73 |
| Couple | 53.12 |
As these watermarked images show, the naked eye cannot sense the presence of the watermark. The watermarked images possess good visual quality, and their PSNR values illustrate that the proposed algorithm has good imperceptibility for various image types. The average PSNR of the eight images shown in Figure 7 after embedding the watermark reaches 53.43 dB.
As can be seen from Tables 3 and 4, the proposed algorithm has good visual invisibility for images with different texture types. To estimate the maximum watermark embedding capacity, the watermark must be embedded into all sub-blocks (except the blocks containing abrupt points). The proposed algorithm can embed the watermark multiple times while guaranteeing a certain visual quality; here, an example of one-time embedding is provided.
As shown in Table 5, the values 10, 30, 70, 90 and 100% refer to the proportion of the embedded watermark relative to the maximum embedding capacity. The PSNR and SSIM evaluate the visual quality of the watermarked images at each of these proportions. Table 5 makes clear that the proposed reversible watermarking technique outperforms the techniques in [17] and [18], offering competitive SSIM and PSNR values at a much larger payload capacity. The results show that the proposed technique not only guarantees the visual quality of the watermarked images but also significantly improves the payload capacity.
Table 5. Embedding capacity, SSIM, and PSNR (dB) at 10, 30, 70, 90 and 100% of the maximum embedding capacity.

| Image name | Algorithm | Embedding capacity (bit) | SSIM | PSNR@10% | PSNR@30% | PSNR@70% | PSNR@90% | PSNR@100% |
|---|---|---|---|---|---|---|---|---|
| Lena | [18] | 276,599 | 0.7331 | 54.37 | 42.92 | 36.83 | 32.91 | 31.95 |
| Lena | [17] | 9767 | 0.9188 | 45.78 | 43.48 | 41.58 | 39.78 | 38.64 |
| Lena | Proposed | 236,432 | 0.7935 | 48.65 | 45.24 | 38.91 | 37.05 | 36.34 |
| Baboon | [18] | 41,519 | 0.8232 | 51.25 | 42.43 | 34.94 | 33.48 | 31.02 |
| Baboon | [17] | 11,056 | 0.9047 | 45.86 | 43.84 | 41.95 | 40.56 | 39.76 |
| Baboon | Proposed | 224,512 | 0.8217 | 47.44 | 44.65 | 38.77 | 37.21 | 36.42 |
| Barbara | [18] | 392,740 | 0.6724 | 50.12 | 48.54 | 46.80 | 45.11 | 31.65 |
| Barbara | [17] | 13,232 | 0.8931 | 45.66 | 42.89 | 39.54 | 37.10 | 36.65 |
| Barbara | Proposed | 232,864 | 0.8011 | 47.33 | 44.14 | 38.47 | 37.28 | 36.17 |
| Peppers | [18] | 138,695 | 0.8154 | 52.62 | 42.98 | 35.12 | 33.58 | 31.53 |
| Peppers | [17] | 8562 | 0.9369 | 46.24 | 44.77 | 41.82 | 39.54 | 38.26 |
| Peppers | Proposed | 232,434 | 0.8115 | 48.13 | 44.35 | 38.54 | 36.79 | 35.84 |
| Girl | [18] | 293,674 | 0.7323 | 53.14 | 42.17 | 36.32 | 32.33 | 31.35 |
| Girl | [17] | 14,876 | 0.9157 | 45.31 | 42.86 | 41.03 | 39.25 | 38.12 |
| Girl | Proposed | 224,796 | 0.7938 | 48.09 | 44.71 | 38.20 | 36.51 | 35.96 |
| Cameraman | [18] | 179,643 | 0.8123 | 52.32 | 42.36 | 34.42 | 33.97 | 31.02 |
| Cameraman | [17] | 12,392 | 0.9219 | 46.03 | 44.10 | 41.25 | 39.05 | 37.72 |
| Cameraman | Proposed | 236,562 | 0.8112 | 47.67 | 43.82 | 38.09 | 36.29 | 35.33 |
| Man | [18] | 159,427 | 0.8032 | 51.47 | 41.49 | 33.35 | 32.99 | 30.42 |
| Man | [17] | 10,074 | 0.9182 | 45.91 | 43.85 | 41.07 | 38.79 | 37.41 |
| Man | Proposed | 224,862 | 0.8112 | 47.67 | 43.82 | 38.09 | 36.29 | 35.33 |
| Couple | [18] | 233,912 | 0.7638 | 52.38 | 42.67 | 34.84 | 33.37 | 31.22 |
| Couple | [17] | 12,128 | 0.9298 | 46.11 | 44.23 | 41.54 | 39.25 | 37.83 |
| Couple | Proposed | 235,956 | 0.8231 | 47.98 | 44.09 | 38.26 | 36.48 | 35.53 |
Table 6 shows the PSNR values of the watermarked images, the SSIM values between the original image and the image restored after extracting the watermark, and the SSIM values between the original watermark and the extracted watermark, for the three comparison algorithms and the proposed one. The PSNR of the watermarked image obtained by the proposed algorithm is higher, especially compared with the method in [19]. Furthermore, the original image can be recovered exactly after extracting the watermark, and the extracted watermark is identical to the original (SSIM = 1). Compared with the methods in [19–21], the proposed algorithm improves significantly.
Table 6. Comparison with the algorithms in [19–21].

| Evaluating indicator | Image | [19] | [20] | [21] | Proposed |
|---|---|---|---|---|---|
| PSNR (watermarked images) | Lena | 46.03 | 50.60 | 48.05 | 54.63 |
| | Baboon | 41.22 | 48.64 | 49.22 | 52.36 |
| | Barbara | 46.15 | 47.22 | 49.01 | 53.51 |
| | Peppers | 42.05 | 41.31 | 48.20 | 53.14 |
| | Girl | 45.71 | 49.63 | 48.41 | 54.27 |
| | Cameraman | 43.35 | 43.29 | 48.77 | 53.69 |
| | Man | 43.28 | 43.25 | 48.59 | 52.73 |
| | Couple | 42.11 | 41.25 | 48.08 | 53.12 |
| SSIM (original image vs. recovered image after extraction) | Lena | 1 | 0.9975 | 0.9945 | 1 |
| | Baboon | 0.9999 | 0.9995 | 0.9994 | 1 |
| | Barbara | 0.9999 | 0.9995 | 0.9982 | 1 |
| | Peppers | 0.9999 | 0.9954 | 0.9964 | 1 |
| | Girl | 1 | 0.9987 | 0.9954 | 1 |
| | Cameraman | 0.9999 | 0.9969 | 0.9968 | 1 |
| | Man | 0.9999 | 0.9973 | 0.9965 | 1 |
| | Couple | 0.9999 | 0.9964 | 0.9962 | 1 |
| SSIM (original watermark vs. extracted watermark) | Lena | 1 | 0.9905 | 0.9835 | 1 |
| | Baboon | 1 | 0.9738 | 0.9978 | 1 |
| | Barbara | 1 | 0.9676 | 0.9845 | 1 |
| | Peppers | 1 | 0.9655 | 0.9878 | 1 |
| | Girl | 1 | 0.9836 | 0.9803 | 1 |
| | Cameraman | 1 | 0.9752 | 0.9845 | 1 |
| | Man | 1 | 0.9749 | 0.9842 | 1 |
| | Couple | 1 | 0.9652 | 0.9875 | 1 |
In this paper, three fixed embedding rates of 0.5, 1.0 and 1.2 bpp were selected for comparing the corresponding PSNR values; the results are shown in Table 7. The "--" entries in Table 7 indicate that the corresponding measurement cannot be carried out because of the characteristics of the image itself. As Table 7 shows, at the same embedding rate of 0.5 bpp, the average PSNR of the proposed algorithm is 2.67 dB higher than that in [22] and 1.93 dB higher than that in [23]. As the embedding rate increases to 1.0 bpp, the average PSNR of the proposed algorithm is 2.6 dB higher than that in [22] and 1.6 dB higher than that in [23]; at 1.2 bpp, it is 2.5 dB higher than that in [22] and 0.7 dB higher than that in [23]. The main reason for this difference is that the algorithm in [22] uses reversible image watermarking based on histogram-shift interpolation, which performs better at small embedding amounts because its prediction-error histogram is more concentrated. When the host image has a vast proportion of pixels at the lightest or darkest end, the lack of a gray-overflow control scheme means the auxiliary information for the host image cannot be saved completely, leading to an embedding failure. In [23], the watermark is mostly distributed in local areas with strong embedding ability and rarely in areas with weak embedding ability, which inevitably produces distortion after iterative processing and degrades image quality. The proposed algorithm therefore achieves better visual quality and is suitable for embedding high-capacity watermark information.
Table 7. PSNR (dB) comparison at fixed embedding rates ("--": measurement not possible for that image).

| Embedding rate | Algorithm | Lena | Baboon | Barbara | Peppers | Girl | Cameraman | Man | Couple |
|---|---|---|---|---|---|---|---|---|---|
| 0.5 bpp | [22] | 42.5 | 33.9 | 42.8 | -- | -- | -- | -- | -- |
| 0.5 bpp | [23] | 42.7 | 38.3 | 42.9 | -- | 41.5 | 41.1 | 40.9 | -- |
| 0.5 bpp | Proposed | 41.5 | 40.9 | 41.8 | 41.4 | 41.4 | 41.2 | 41.1 | 41.4 |
| 1.0 bpp | [22] | 33.2 | -- | 34.3 | -- | -- | 33.7 | -- | -- |
| 1.0 bpp | [23] | 34.7 | -- | 35.2 | -- | 34.9 | -- | -- | -- |
| 1.0 bpp | Proposed | 36.3 | 36.2 | 36.4 | 35.8 | 36.3 | 35.9 | 35.8 | 35.9 |
| 1.2 bpp | [22] | -- | -- | 30.9 | -- | -- | -- | -- | -- |
| 1.2 bpp | [23] | 32.1 | -- | 32.8 | -- | 31.8 | 32.5 | 32.4 | -- |
| 1.2 bpp | Proposed | 33.2 | 32.8 | 33.4 | 32.9 | 33.2 | 32.8 | 32.9 | 32.8 |
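The embedding rates in Table 7 are expressed in bits per pixel (bpp), i.e., the payload size divided by the number of pixels in the host image. A one-line sketch of that conversion:

```python
def embedding_rate_bpp(payload_bits, rows, cols):
    """Embedding rate in bits per pixel: payload size / image size."""
    return payload_bits / (rows * cols)

# For the 512 x 512 test images, a 0.5 bpp rate corresponds to a
# payload of 0.5 * 512 * 512 = 131072 bits.
print(embedding_rate_bpp(131072, 512, 512))  # 0.5
```
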
This paper presents an improved large-capacity reversible image watermarking scheme based on difference expansion (DE), which effectively improves both the embedding rate and the visual quality. The proposed scheme controls the overflow of pixel values when embedding the watermark with the DE, and uses multi-scale decomposition to remove the abrupt points in each image block. On the one hand, the GDE offers a higher embedding amount than the DE when embedding the watermark. On the other hand, the proposed overflow processing method improves the embedding capacity without locating the overflow pixels: the GDE embeds the watermark directly, with no need to eliminate the overflow pixels it may generate. While keeping a certain visual quality, the algorithm can perform watermark embedding multiple times, and after extracting the embedded watermark it recovers the original image without any loss. Compared with other algorithms, the main contribution of the proposed algorithm is to enhance the embedding rate while maintaining promising visual quality; the simulation results showed that the effective embedded watermark capacity was superior to that of other algorithms under the premise of maintaining good visual quality.
This work is supported by the National Statistical Science Research Project (2018LY12), and Six Talent Peaks Project in Jiangsu Province (XYDXXJS-011).
All authors declare no conflicts of interest in this paper.
[1] E. Aličković, A. Subasi, Breast cancer diagnosis using GA feature selection and Rotation Forest, Neural Comput. Appl., 28 (2015), 753–763. https://doi.org/10.1007/s00521-015-2103-9
[2] World Health Organization, Breast cancer 2021, 2021. Available from: https://www.who.int/news-room/fact-sheets/detail/breast-cancer.
[3] Y. S. Sun, Z. Zhao, Z. N. Yang, F. Xu, H. J. Lu, Z. Y. Zhu, et al., Risk factors and preventions of breast cancer, Int. J. Biol. Sci., 13 (2017), 1387–1397. https://doi.org/10.7150/ijbs.21635
[4] J. B. Harford, Breast-cancer early detection in low-income and middle-income countries: Do what you can versus one size fits all, Lancet Oncol., 12 (2011), 306–312. https://doi.org/10.1016/s1470-2045(10)70273-4
[5] C. Lerman, M. Daly, C. Sands, A. Balshem, E. Lustbader, T. Heggan, et al., Mammography adherence and psychological distress among women at risk for breast cancer, J. Natl. Cancer Inst., 85 (1993), 1074–1080. https://doi.org/10.1093/jnci/85.13.1074
[6] P. T. Huynh, A. M. Jarolimek, S. Daye, The false-negative mammogram, Radiographics, 18 (1998), 1137–1154. https://doi.org/10.1148/radiographics.18.5.9747612
[7] M. G. Ertosun, D. L. Rubin, Probabilistic visual search for masses within mammography images using deep learning, in 2015 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), (2015), 1310–1315. https://doi.org/10.1109/bibm.2015.7359868
[8] Y. Lu, J. Y. Li, Y. T. Su, A. A. Liu, A review of breast cancer detection in medical images, in 2018 IEEE Visual Communications and Image Processing, (2018), 1–4. https://doi.org/10.1109/vcip.2018.8698732
[9] J. Ferlay, I. Soerjomataram, R. Dikshit, S. Eser, C. Mathers, M. Rebelo, et al., Cancer incidence and mortality worldwide: Sources, methods and major patterns in GLOBOCAN 2012, Int. J. Cancer, 136 (2014), E359–E386. https://doi.org/10.1002/ijc.29210
[10] N. Mao, P. Yin, Q. Wang, M. Liu, J. Dong, X. Zhang, et al., Added value of radiomics on mammography for breast cancer diagnosis: A feasibility study, J. Am. Coll. Radiol., 16 (2019), 485–491. https://doi.org/10.1016/j.jacr.2018.09.041
[11] H. Wang, J. Feng, Q. Bu, F. Liu, M. Zhang, Y. Ren, et al., Breast mass detection in digital mammogram based on Gestalt psychology, J. Healthc. Eng., 2018 (2018), 1–13. https://doi.org/10.1155/2018/4015613
[12] S. McGuire, World cancer report 2014, Switzerland: World Health Organization, International Agency for Research on Cancer, Adv. Nutrit. Int. Rev., 7 (2016), 418–419. https://doi.org/10.3945/an.116.012211
[13] M. K. Gupta, P. Chandra, A comprehensive survey of data mining, Int. J. Comput. Technol., 12 (2020), 1243–1257. https://doi.org/10.1007/s41870-020-00427-7
[14] T. Zou, T. Sugihara, Fast identification of a human skeleton-marker model for motion capture system using stochastic gradient descent method, in 2020 8th IEEE RAS/EMBS International Conference for Biomedical Robotics and Biomechatronics (BioRob), (2020), 181–186. https://doi.org/10.1109/biorob49111.2020.9224442
[15] A. Reisizadeh, A. Mokhtari, H. Hassani, R. Pedarsani, An exact quantized decentralized gradient descent algorithm, IEEE Trans. Signal Process., 67 (2019), 4934–4947. https://doi.org/10.1109/tsp.2019.2932876
[16] D. Maulud, A. M. Abdulazeez, A review on linear regression comprehensive in machine learning, J. Appl. Sci. Technol. Trends, 1 (2020), 140–147. https://doi.org/10.38094/jastt1457
[17] D. R. Wilson, T. R. Martinez, The general inefficiency of batch training for gradient descent learning, Neural Networks, 16 (2003), 1429–1451. https://doi.org/10.1016/s0893-6080(03)00138-2
[18] D. Yi, S. Ji, S. Bu, An enhanced optimization scheme based on gradient descent methods for machine learning, Symmetry, 11 (2019), 942. https://doi.org/10.3390/sym11070942
[19] D. A. Zebari, D. Q. Zeebaree, A. M. Abdulazeez, H. Haron, H. N. Hamed, Improved threshold based and trainable fully automated segmentation for breast cancer boundary and pectoral muscle in mammogram images, IEEE Access, 8 (2020), 203097–203116. https://doi.org/10.1109/access.2020.3036072
[20] D. Q. Zeebaree, H. Haron, A. M. Abdulazeez, D. A. Zebari, Trainable model based on new uniform LBP feature to identify the risk of the breast cancer, in 2019 International Conference on Advanced Science and Engineering (ICOASE), 2019. https://doi.org/10.1109/icoase.2019.8723827
[21] D. Q. Zeebaree, A. M. Abdulazeez, L. M. Abdullrhman, D. A. Hasan, O. S. Kareem, The prediction process based on deep recurrent neural networks: A review, Asian J. Comput. Inf. Syst., 10 (2021), 29–45. https://doi.org/10.9734/ajrcos/2021/v11i230259
[22] D. Q. Zeebaree, A. M. Abdulazeez, D. A. Zebari, H. Haron, H. N. A. Hamed, Multi-level fusion in ultrasound for cancer detection based on uniform LBP features, Comput. Mater. Contin., 66 (2021), 3363–3382. https://doi.org/10.32604/cmc.2021.013314
[23] M. Muhammad, D. Zeebaree, A. M. Brifcani, J. Saeed, D. A. Zebari, A review on region of interest segmentation based on clustering techniques for breast cancer ultrasound images, J. Appl. Sci. Technol. Trends, 1 (2020), 78–91. https://doi.org/10.38094/jastt1328
[24] P. Kamsing, P. Torteeka, S. Yooyen, An enhanced learning algorithm with a particle filter-based gradient descent optimizer method, Neural Comput. Appl., 32 (2020), 12789–12800. https://doi.org/10.1007/s00521-020-04726-9
[25] Y. Hamid, L. Journaux, J. A. Lee, M. Sugumaran, A novel method for network intrusion detection based on nonlinear SNE and SVM, J. Artif. Intell. Soft Comput. Res., 6 (2018), 265. https://doi.org/10.1504/ijaisc.2018.097280
[26] H. Sadeeq, A. M. Abdulazeez, Hardware implementation of firefly optimization algorithm using FPGAs, in 2018 International Conference on Advanced Science and Engineering, (2018), 30–35. https://doi.org/10.1109/icoase.2018.8548822
[27] D. P. Hapsari, I. Utoyo, S. W. Purnami, Fractional gradient descent optimizer for linear classifier support vector machine, in 2020 Third International Conference on Vocational Education and Electrical Engineering (ICVEE), (2020), 1–5.
[28] M. S. Nawaz, B. Shoaib, M. A. Ashraf, Intelligent cardiovascular disease prediction empowered with gradient descent optimization, Heliyon, 7 (2021), 1–10. https://doi.org/10.1016/j.heliyon.2021.e06948
[29] Y. Qian, Exploration of machine algorithms based on deep learning model and feature extraction, Math. Biosci. Eng., 18 (2021), 7602–7618. https://doi.org/10.3934/mbe.2021376
[30] Z. Wang, M. Li, H. Wang, H. Jiang, Y. Yao, H. Zhang, et al., Breast cancer detection using extreme learning machine based on feature fusion with CNN deep features, IEEE Access, 7 (2019), 105146–105158. https://doi.org/10.1109/access.2019.2892795
[31] UCI Machine Learning Repository, Breast Cancer Wisconsin (Diagnostic) Data Set. Available from: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic).
[32] R. V. Anji, B. Soni, R. K. Sudheer, Breast cancer detection by leveraging machine learning, ICT Express, 6 (2020), 320–324. https://doi.org/10.1016/j.icte.2020.04.009
[33] Z. Salod, Y. Singh, Comparison of the performance of machine learning algorithms in breast cancer screening and detection: A protocol, J. Public Health Res., 8 (2019). https://doi.org/10.4081/jphr.2019.1677
[34] Y. Lin, H. Luo, D. Wang, H. Guo, K. Zhu, An ensemble model based on machine learning methods and data preprocessing for short-term electric load forecasting, Energies, 10 (2017), 1186. https://doi.org/10.3390/en10081186
[35] M. Amrane, S. Oukid, I. Gagaoua, T. Ensari, Breast cancer classification using machine learning, in 2018 Electric Electronics, Computer Science, Biomedical Engineerings' Meeting (EBBT), (2018), 1–4. https://doi.org/10.1109/ebbt.2018.8391453
[36] R. Sumbaly, N. Vishnusri, S. Jeyalatha, Diagnosis of breast cancer using decision tree data mining technique, Int. J. Comput. Appl., 98 (2014), 16–24. https://doi.org/10.5120/17219-7456
[37] B. Zheng, S. W. Yoon, S. S. Lam, Breast cancer diagnosis based on feature extraction using a hybrid of k-means and support vector machine algorithms, Expert Syst. Appl., 41 (2014), 1476–1482. https://doi.org/10.1016/j.eswa.2013.08.044
[38] T. Araújo, G. Aresta, E. Castro, J. Rouco, P. Aguiar, C. Eloy, et al., Classification of breast cancer histology images using convolutional neural networks, PLoS One, 12 (2017), e0177544. https://doi.org/10.1371/journal.pone.0177544
[39] S. P. Rajamohana, A. Dharani, P. Anushree, B. Santhiya, K. Umamaheswari, Machine learning techniques for healthcare applications: early autism detection using ensemble approach and breast cancer prediction using SMO and IBK, in Cognitive Social Mining Applications in Data Analytics and Forensics, (2019), 236–251. https://doi.org/10.4018/978-1-5225-7522-1.ch012
[40] L. G. Ahmad, Using three machine learning techniques for predicting breast cancer recurrence, J. Health Med. Inf., 4 (2013), 10–15. https://doi.org/10.4172/2157-7420.1000124
[41] B. Padmapriya, T. Velmurugan, Classification algorithm based analysis of breast cancer data, Int. J. Data Min. Tech. Appl., 5 (2016), 43–49. https://doi.org/10.20894/ijdmta.102.005.001.010
[42] S. Bharati, M. A. Rahman, P. Podder, Breast cancer prediction applying different classification algorithm with comparative analysis using Weka, in 2018 4th International Conference on Electrical Engineering and Information & Communication Technology (ICEEiCT), (2018), 581–584. https://doi.org/10.1109/ceeict.2018.8628084
[43] K. Williams, P. A. Idowu, J. A. Balogun, A. I. Oluwaranti, Breast cancer risk prediction using data mining classification techniques, Trans. Networks Commun., 3 (2015), 17–23. https://doi.org/10.14738/tnc.32.662
[44] P. Mekha, N. Teeyasuksaet, Deep learning algorithms for predicting breast cancer based on tumor cells, in 2019 Joint International Conference on Digital Arts, Media and Technology with ECTI Northern Section Conference on Electrical, Electronics, Computer and Telecommunications Engineering (ECTI DAMT-NCON), 2019. https://doi.org/10.1109/ecti-ncon.2019.8692297
[45] C. Shah, A. G. Jivani, Comparison of data mining classification algorithms for breast cancer prediction, in 2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT), 2013. https://doi.org/10.1109/icccnt.2013.6726477
[46] A. A. Bataineh, A comparative analysis of nonlinear machine learning algorithms for breast cancer detection, Int. J. Mach. Learn. Comput., 9 (2019), 248–254. https://doi.org/10.18178/ijmlc.2019.9.3.794
[47] M. S. M. Prince, A. Hasan, F. M. Shah, An efficient ensemble method for cancer detection, in 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT), 2019. https://doi.org/10.1109/icasert.2019.8934817
[48] S. Aruna, A novel SVM based CSSFFS feature selection algorithm for detecting breast cancer, Int. J. Comput., 31 (2011), 14–20. https://doi.org/10.5120/3844-5346
[49] G. Carneiro, J. Nascimento, A. P. Bradley, Automated analysis of unregistered multi-view mammograms with deep learning, IEEE Trans. Med. Imaging, 36 (2017), 2355–2365. https://doi.org/10.1109/tmi.2017.2751523
[50] Z. Sha, L. Hu, B. D. Rouyendegh, Deep learning and optimization algorithms for automatic breast cancer detection, Int. J. Imaging Syst. Technol., 30 (2020), 495–506. https://doi.org/10.1002/ima.22400
[51] M. Mahmoud, Breast cancer classification in histopathological images using convolutional neural network, Int. J. Comput. Sci. Appl., 9 (2018), 12–15. https://doi.org/10.14569/ijacsa.2018.090310
[52] Z. Jiao, X. Gao, Y. Wang, J. Li, A deep feature based framework for breast masses classification, Neurocomputing, 197 (2016), 221–231. https://doi.org/10.1016/j.neucom.2016.02.060
[53] M. H. Yap, G. Pons, J. Marti, S. Ganau, M. Sentis, R. Zwiggelaar, et al., Automated breast ultrasound lesions detection using convolutional neural networks, IEEE J. Biomed. Health Inf., 22 (2018), 1218–1226. https://doi.org/10.1109/jbhi.2017.2731873
[54] N. Wahab, A. Khan, Y. S. Lee, Transfer learning based deep CNN for segmentation and detection of mitoses in breast cancer histopathological images, Microscopy, 68 (2019), 216–233. https://doi.org/10.1093/jmicro/dfz002
[55] Z. Wang, G. Yu, Y. Kang, Y. Zhao, Q. Qu, Breast tumor detection in digital mammography based on extreme learning machine, Neurocomputing, 128 (2014), 175–184. https://doi.org/10.1016/j.neucom.2013.05.053
[56] Y. Qiu, Y. Wang, S. Yan, M. Tan, S. Cheng, H. Liu, et al., An initial investigation on developing a new method to predict short-term breast cancer risk based on deep learning technology, Comput. Aided Des., 2016. https://doi.org/10.1117/12.2216275
[57] X. W. Chen, X. Lin, Big data deep learning: Challenges and perspectives, IEEE Access, 2 (2014), 514–525. https://doi.org/10.1109/access.2014.2325029
[58] J. Arevalo, F. A. González, R. R. Pollán, J. L. Oliveira, M. A. G. Lopez, Representation learning for mammography mass lesion classification with convolutional neural networks, Comput. Methods Programs Biomed., 127 (2016), 248–257. https://doi.org/10.1016/j.cmpb.2015.12.014
[59] Y. Kumar, A. Aggarwal, S. Tiwari, K. Singh, An efficient and robust approach for biomedical image retrieval using Zernike moments, Biomed. Signal Process. Control, 39 (2018), 459–473. https://doi.org/10.1016/j.bspc.2017.08.018
[60] K. Kalaiarasi, R. Soundaria, N. Kausar, P. Agarwal, H. Aydi, H. Alsamir, Optimization of the average monthly cost of an EOQ inventory model for deteriorating items in machine learning using Python, Therm. Sci., 25 (2021), 347–358. https://doi.org/10.2298/tsci21s2347k
[61] M. Franulović, K. Marković, A. Trajkovski, Calibration of material models for the human cervical spine ligament behaviour using a genetic algorithm, Facta Univ. Ser. Mech. Eng., 19 (2021), 751. https://doi.org/10.22190/fume201029023f
[62] M. Fayaz, D. H. Kim, A prediction methodology of energy consumption based on deep extreme learning machine and comparative analysis in residential buildings, Electronics, 7 (2018), 222. https://doi.org/10.3390/electronics7100222
[63] G. B. Huang, D. H. Wang, Y. Lan, Extreme learning machines: A survey, Int. J. Mach. Learn. Cybern., 2 (2011), 107–122. https://doi.org/10.1007/s13042-011-0019-y
[64] H. Tang, S. Gao, L. Wang, X. Li, B. Li, S. Pang, A novel intelligent fault diagnosis method for rolling bearings based on Wasserstein generative adversarial network and convolutional neural network under unbalanced dataset, Sensors, 21 (2021), 6754. https://doi.org/10.3390/s21206754
[65] J. Wei, H. Liu, G. Yan, F. Sun, Multi-modal deep extreme learning machine for robotic grasping recognition, Proceed. Adapt. Learn. Optim., (2016), 223–233. https://doi.org/10.1007/978-3-319-28373-9_19
[66] N. S. Naz, M. A. Khan, S. Abbas, A. Ather, S. Saqib, Intelligent routing between capsules empowered with deep extreme machine learning technique, SN Appl. Sci., 2 (2019), 1–14. https://doi.org/10.1007/s42452-019-1873-6
[67] J. Cai, J. Luo, S. Wang, S. Yang, Feature selection in machine learning: A new perspective, Neurocomputing, 300 (2018), 70–79. https://doi.org/10.1016/j.neucom.2017.11.077
[68] L. M. Abualigah, A. T. Khader, E. S. Hanandeh, A new feature selection method to improve the document clustering using particle swarm optimization algorithm, J. Comput. Sci., 25 (2018), 456–466. https://doi.org/10.1016/j.jocs.2017.07.018
[69] P. A. Flach, ROC analysis, in Encyclopedia of Machine Learning and Data Mining, (2016), 1–8. https://doi.org/10.1007/978-1-4899-7502-7_739-1
[70] Q. Wuniri, W. Huangfu, Y. Liu, X. Lin, L. Liu, Z. Yu, A generic-driven wrapper embedded with feature-type-aware hybrid Bayesian classifier for breast cancer classification, IEEE Access, 7 (2019), 119931–119942. https://doi.org/10.1109/access.2019.2932505
[71] J. Zheng, D. Lin, Z. Gao, S. Wang, M. He, J. Fan, Deep learning assisted efficient AdaBoost algorithm for breast cancer detection and early diagnosis, IEEE Access, 8 (2020), 96946–96954. https://doi.org/10.1109/access.2020.2993536
[72] X. Zhang, D. He, Y. Zheng, H. Huo, S. Li, R. Chai, et al., Deep learning based analysis of breast cancer using advanced ensemble classifier and linear discriminant analysis, IEEE Access, 8 (2020), 120208–120217. https://doi.org/10.1109/access.2020.3005228
[73] Y. Yari, T. V. Nguyen, H. T. Nguyen, Deep learning applied for histological diagnosis of breast cancer, IEEE Access, 8 (2020), 162432–162448. https://doi.org/10.1109/access.2020.3021557
[74] A. H. Osman, H. M. Aljahdali, An effective of ensemble boosting learning method for breast cancer virtual screening using neural network model, IEEE Access, 8 (2020), 39165–39174. https://doi.org/10.1109/access.2020.2976149
[75] Y. Li, J. Wu, Q. Wu, Classification of breast cancer histology images using multi-size and discriminative patches based on deep learning, IEEE Access, 7 (2019), 21400–21408. https://doi.org/10.1109/access.2019.2898044
[76] D. M. Vo, N. Q. Nguyen, S. W. Lee, Classification of breast cancer histology images using incremental boosting convolution networks, Inf. Sci., 482 (2019), 123–138. https://doi.org/10.1016/j.ins.2018.12.089
[77] S. Y. Siddiqui, M. A. Khan, S. Abbas, F. Khan, Smart occupancy detection for road traffic parking using deep extreme learning machine, J. King Saud Univ. Comput. Inf. Sci., 34 (2022), 727–733. https://doi.org/10.1016/j.jksuci.2020.01.016
[78] M. A. Khan, S. Abbas, K. M. Khan, M. A. A. Ghamdi, A. Rehman, Intelligent forecasting model of COVID-19 novel coronavirus outbreak empowered with deep extreme learning machine, Comput. Mater. Contin., 64 (2020), 1329–1342. https://doi.org/10.32604/cmc.2020.011155
[79] S. Abbas, M. A. Khan, L. E. F. Morales, A. Rehman, Y. Saeed, Modelling, simulation and optimization of power plant energy sustainability for IoT enabled smart cities empowered with deep extreme learning machine, IEEE Access, 8 (2020), 39982–39997. https://doi.org/10.1109/ACCESS.2020.2976452
[80] A. Rehman, A. Athar, M. A. Khan, S. Abbas, A. Fatima, M. Zareei, et al., Modelling, simulation, and optimization of diabetes type II prediction using deep extreme learning machine, J. Ambient Intell. Smart Environ., 12 (2020), 125–138. https://doi.org/10.3233/AIS-200554
[81] A. Haider, M. A. Khan, A. Rehman, H. S. Kim, A real-time sequential deep extreme learning machine cybersecurity intrusion detection system, Comput. Mater. Contin., 66 (2021), 1785–1798. https://doi.org/10.32604/cmc.2020.013910
[82] M. A. Khan, A. Rehman, K. M. Khan, M. A. A. Ghamdi, S. H. Almotiri, Enhance intrusion detection in computer networks based on deep extreme learning machine, Comput. Mater. Contin., 66 (2021), 467–480. https://doi.org/10.32604/cmc.2020.013121
[83] U. Ahmed, G. F. Issa, M. A. Khan, S. Aftab, M. F. Khan, R. A. T. Said, et al., Prediction of diabetes empowered with fused machine learning, IEEE Access, 10 (2022), 8529–8538. https://doi.org/10.1109/ACCESS.2022.3142097
[84] S. Y. Siddiqui, A. Haider, T. M. Ghazal, M. A. Khan, I. Naseer, S. Abbas, et al., IoMT cloud-based intelligent prediction of breast cancer stages empowered with deep learning, IEEE Access, 9 (2021), 146478–146491. https://doi.org/10.1109/ACCESS.2021.3123472
[85] M. Ahmad, M. Alfayad, S. Aftab, M. A. Khan, A. Fatima, B. Shoaib, et al., Data and machine learning fusion architecture for cardiovascular disease prediction, Comput. Mater. Contin., 69 (2021), 2717–2731. https://doi.org/10.32604/cmc.2021.019013