
Accurate segmentation of abdominal tissues is one of the crucial tasks in radiation therapy planning for related diseases. However, abdominal tissue segmentation (liver, kidney) is difficult because of the low contrast between these tissues and their surrounding organs. In this paper, an attention-based deep learning method for automated abdominal tissue segmentation is proposed. In our method, image cropping is first applied to the original images. A U-net model with an attention mechanism is then constructed to obtain an initial segmentation of the abdominal tissues. Finally, a level set evolution consisting of three energy terms is used to optimize the initial segmentation. The proposed model is evaluated across 470 subsets. For liver segmentation, the mean Dice scores are 96.2 and 95.1% on the FLARE21 and LiTS datasets, respectively. For kidney segmentation, the mean Dice scores are 96.6 and 95.7% on the FLARE21 and LiTS datasets, respectively. Experimental evaluation shows that the proposed method obtains better segmentation results than competing methods.
Citation: Zhaoxuan Gong, Jing Song, Wei Guo, Ronghui Ju, Dazhe Zhao, Wenjun Tan, Wei Zhou, Guodong Zhang. Abdomen tissues segmentation from computed tomography images using deep learning and level set methods[J]. Mathematical Biosciences and Engineering, 2022, 19(12): 14074-14085. doi: 10.3934/mbe.2022655
Abdominal organ segmentation is very important for many clinical applications, such as organ quantification and surgical planning. However, manually delineating organs in computed tomography (CT) images is subjective and time-consuming for the physician. Automatic and robust abdominal organ extraction from CT images is difficult because of the low intensity contrast between abdominal organs and their neighbors.
Plenty of abdominal segmentation methods have been proposed in recent years. For liver segmentation, Elaziz et al. [1] combine intensity analysis and region growing for automatic liver segmentation. Rafiei et al. [2] use a 3D region-growing model in which subject-specific conditions are integrated into the region-growing process. Tran et al. [3] use a multiple-layer U-net model with dilated convolutions to segment the liver region. Sofia et al. [4] opt for an atlas-based strategy for liver boundary detection; their method is evaluated on a public dataset and achieves an average Dice of 0.94. Song et al. [5] propose a full-context convolutional neural network that combines context and temporal information to extract the liver region. Zhang et al. [6] propose a 3D residual U-net that exploits context information for liver segmentation, from which long-range correlations can be captured and the features expanded. Lei et al. [7] present a newly designed deep learning model for liver segmentation in which deformable convolution and ladder atrous spatial pyramid pooling are combined to learn richer context information. Fang et al. [8] propose a deep learning-based method that fuses input and output features in one network using multi-scale models to abstract context information. Amina et al. [9] present an active contour model for liver extraction: a Gaussian mixture model is applied as pre-processing to segment the initial liver contour, and a gradient vector flow model is then used for fine segmentation. Noman et al. [10] propose a mask region-based deep learning network whose output directly yields the liver contour. Wang et al. [11] propose a shape-based liver segmentation model in which the mark label and a priori shape are combined to obtain the liver region. Qin et al. [12] combine superpixel information and boundary information in a convolutional neural network, using superpixels and an entropy-based saliency map to obtain the liver region. Yao et al. [13] propose an improved U-net liver segmentation method that combines label fusion and residual connections. Shao et al. [14] propose a contextual attention strategy that uses spatial and channel information to obtain the liver features. Li et al. [15] propose an attention UNet++ method, constructing a UNet++ model with an attention mechanism that segments the liver region accurately.
For kidney segmentation, Yan et al. [16] propose an effective hybrid model in which sparse point-cloud segmentation networks and contextual information are combined to obtain the kidney region. Thein et al. [17] use a pre-processing method followed by region growing to acquire the kidney region. Yang et al. [18] propose a three-dimensional (3D) fully convolutional network in which 3D spatial contextual information and a pyramid pooling module are combined to acquire the kidney contour. Arafat et al. [19] propose a cascaded regression deep learning method based on Mask R-CNN to obtain the kidney region. Chen et al. [20] present a 3D kidney segmentation method in which a deformable mesh model is constructed to obtain the kidney region. Yin et al. [21] propose a boundary-based deep learning method for kidney segmentation, and experimental results demonstrate its effectiveness. Statkevych et al. [22] use a U-net model for kidney segmentation and obtain acceptable results. Li et al. [23] propose an iterative convolution-threshold method that minimizes the energy function during kidney segmentation, which significantly improves segmentation efficiency. Les et al. [24] combine a U-net model and batch-based synthesis: the U-net is used for initial segmentation, and batch-based synthesis is then applied for fine segmentation. Torres et al. [25] propose a fast phase-based approach in which the initial kidney region is obtained by a multi-step feature detection method and an active surface method is designed to refine the final kidney contour. Weerasinghe et al. [26] propose a deep learning method that combines label fusion and 3D B-mode images to acquire the kidney region. Guo et al. [27] propose a residual and attention-based U-net model for kidney segmentation whose network output gives the segmentation result. Geethanjali et al. [28] use an attention U-net model to obtain the kidney region and achieve acceptable results.
In this paper, we present a hybrid method that combines a deep learning model and level set evolution for abdominal tissue segmentation. An attention-based deep CNN is constructed to obtain the initial abdominal tissue region, and level set evolution is then applied to refine the segmentation. Figure 1 shows the pipeline of our framework.
The choice of the input image is particularly important, as it affects both the computational cost and the risk of overfitting of the CNN. The liver occupies only part of the abdominal area. Therefore, an automatic image cropping method is presented in this section. The cropped images capture sufficient information but require significantly less memory for storage. These cropped CT images are used as input for training the CNN. Figure 2 shows the process of image cropping, and the proposed cropping method is summarized in Algorithm 1.
Algorithm 1: Image cropping
Input: Initial abdomen image I
Output: The cropped image L
1) Compute the threshold T from the histogram of I; T is used to distinguish the background from the abdomen region;
2) Acquire the bounding box (red line in Figure 2(a)) of the abdomen region according to T;
3) The region inside the bounding box is the cropped image L.
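For concreteness, a minimal NumPy/scikit-image sketch of Algorithm 1 is given below. The paper does not state how the threshold T is derived from the histogram, so Otsu's method is assumed here; the function name and the optional margin parameter are illustrative.

```python
import numpy as np
from skimage.filters import threshold_otsu


def crop_abdomen(image: np.ndarray, margin: int = 0) -> np.ndarray:
    """Crop a CT slice to the bounding box of the abdomen region (Algorithm 1 sketch)."""
    # Step 1: threshold T from the intensity histogram (Otsu assumed here).
    t = threshold_otsu(image)

    # Step 2: bounding box of the above-threshold (abdomen) pixels.
    mask = image > t
    rows, cols = np.any(mask, axis=1), np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]

    # Optional safety margin, clipped to the image borders.
    r0, c0 = max(r0 - margin, 0), max(c0 - margin, 0)
    r1, c1 = min(r1 + margin, image.shape[0] - 1), min(c1 + margin, image.shape[1] - 1)

    # Step 3: the region inside the bounding box is the cropped image L.
    return image[r0:r1 + 1, c0:c1 + 1]
```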
The overall architecture of the proposed attention-based U-net is shown in Figure 3. The model consists of a down-sampling stage and an up-sampling stage. The numbers of feature maps in the down-sampling path are 16, 32, 64, 128, 256 and 512. To obtain more context information from the deeper pathways, more features are added in the corresponding layers. The attention module is used to capture multi-scale contextual information. Binary cross entropy is used as the loss function, and stochastic gradient descent (SGD) is used as the optimizer. The initial learning rate is 0.0001 and is decreased every 8 epochs. The convolutional kernel size is 3 × 3.
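To make the description above concrete, the sketch below shows an additive attention gate of the kind commonly used on the skip connections of attention U-nets, together with the stated loss, optimizer and learning-rate schedule. The exact gate design and the learning-rate decay factor are not specified in the paper, so they are assumptions, and all names are illustrative.

```python
import torch
import torch.nn as nn


class AttentionGate(nn.Module):
    """Additive attention gate applied to a skip connection (illustrative sketch).

    Assumes the gating signal g (from the deeper decoder path) and the skip
    feature x have already been brought to the same spatial resolution.
    """

    def __init__(self, gate_ch: int, skip_ch: int, inter_ch: int):
        super().__init__()
        self.w_g = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.w_x = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, kernel_size=1), nn.Sigmoid())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        alpha = self.psi(self.relu(self.w_g(g) + self.w_x(x)))  # per-pixel weights in [0, 1]
        return x * alpha  # suppress irrelevant regions of the skip feature


if __name__ == "__main__":
    gate = AttentionGate(gate_ch=32, skip_ch=16, inter_ch=16)
    g = torch.randn(1, 32, 64, 64)   # decoder (gating) feature
    x = torch.randn(1, 16, 64, 64)   # encoder (skip) feature at the same resolution
    print(gate(g, x).shape)          # torch.Size([1, 16, 64, 64])

    # Training configuration stated in the text (decay factor assumed to be 0.1);
    # in practice the full attention U-net's parameters would be optimized.
    criterion = nn.BCEWithLogitsLoss()                       # binary cross entropy on logits
    optimizer = torch.optim.SGD(gate.parameters(), lr=1e-4)  # SGD, initial learning rate 0.0001
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=8, gamma=0.1)  # step every 8 epochs
```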
Li et al. [29] designed the Distance Regularized Level Set Evolution (DRLSE) model, which consists of two terms: a distance regularization term and an energy constraint term. The proposed deep learning method provides the initial abdominal tissue region, and the boundary of this region is taken as the zero level contour of the level set model. The DRLSE model is then applied for fine segmentation, yielding the final abdominal tissue segmentation.
Let I denote an image defined on a domain Ω, and let the edge indicator function ϖ be defined as
$$ \varpi = \frac{1}{1 + |\nabla G_{\sigma} * I|^{2}} \tag{1} $$
where Gσ is a Gaussian kernel with a standard deviation σ.
The DRLSE model consists of three energy terms:
$$ E(\phi) = \alpha\,\Theta(\phi) + \lambda\,\Lambda(\phi) + \beta\,B(\phi) \tag{2} $$
where α, λ and β are positive weighting constants.
The energy functionals Λ(ϕ), B(ϕ) and Θ(ϕ) are defined by
$$ \Lambda(\phi) = \int_{\Omega} \varpi\,\delta(\phi)\,|\nabla\phi|\,dx \tag{3} $$
$$ B(\phi) = \int_{\Omega} \varpi\,H(-\phi)\,dx \tag{4} $$
$$ \Theta(\phi) = \int_{\Omega} p(|\nabla\phi|)\,dx \tag{5} $$
where δ and H denote the Dirac delta function and the Heaviside function, respectively (approximated in practice by the smoothed versions ℓε and Tε given in Eq (6)), and the potential function is defined as p(s) = s². Θ(ϕ) is used to maintain the signed distance property over the entire domain. The energy Λ(ϕ) is minimized when the zero level contour reaches the object boundaries. B(ϕ) accelerates the motion of the zero level contour during the evolution, which is important for improving the segmentation accuracy.
The smoothed Heaviside function Tε(⋅) and the smoothed delta function ℓε(⋅) are defined as:
$$ T_{\varepsilon}(x) = \frac{1}{2}\left(1 + \frac{2}{\pi}\arctan\frac{x}{\varepsilon}\right), \qquad \ell_{\varepsilon}(x) = \frac{1}{\pi}\,\frac{\varepsilon}{x^{2} + \varepsilon^{2}} \tag{6} $$
The final energy function of the DRLSE model is formulated as follows:
$$ E(\phi) = \lambda \int_{\Omega} \varpi\,\ell_{\varepsilon}(\phi)\,|\nabla\phi|\,dx + \beta \int_{\Omega} \varpi\,T_{\varepsilon}(-\phi)\,dx + \alpha \int_{\Omega} p(|\nabla\phi|)\,dx \tag{7} $$
The gradient flow of the energy functional (7) is:
$$ \frac{\partial\phi}{\partial t} = \alpha\,\operatorname{div}\!\big(d_{p}(|\nabla\phi|)\,\nabla\phi\big) + \lambda\,\ell_{\varepsilon}(\phi)\,\operatorname{div}\!\left(\varpi\,\frac{\nabla\phi}{|\nabla\phi|}\right) + \beta\,\varpi\,\ell_{\varepsilon}(\phi) \tag{8} $$
where the function d_p is defined as d_p(s) = p′(s)/s.
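As a rough illustration, the NumPy/SciPy sketch below performs one explicit (forward Euler) iteration of the gradient flow in Eq (8), using the edge indicator of Eq (1) and the smoothed delta function of Eq (6). The discretization choices (central differences via np.gradient, the small constant guarding |∇ϕ|, the time step) are assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace


def edge_indicator(image: np.ndarray, sigma: float = 1.5) -> np.ndarray:
    """Edge indicator of Eq (1): 1 / (1 + |grad(G_sigma * I)|^2)."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    return 1.0 / (1.0 + gx ** 2 + gy ** 2)


def smoothed_delta(phi: np.ndarray, eps: float = 1.5) -> np.ndarray:
    """Regularized delta function ell_eps of Eq (6)."""
    return (eps / np.pi) / (phi ** 2 + eps ** 2)


def drlse_step(phi, g, alpha=1.0, lam=1.0, beta=1.0, eps=1.5, dt=1.0):
    """One forward-Euler update of the gradient flow in Eq (8).

    With the potential p(s) = s^2, d_p(s) = p'(s)/s = 2, so the distance
    regularization term reduces to 2 * Laplacian(phi).
    """
    gy, gx = np.gradient(phi)
    mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-10  # guard against division by zero

    # Edge-weighted curvature term: div(g * grad(phi) / |grad(phi)|).
    div_term = np.gradient(g * gx / mag, axis=1) + np.gradient(g * gy / mag, axis=0)

    delta = smoothed_delta(phi, eps)
    dphi = alpha * 2.0 * laplace(phi) + lam * delta * div_term + beta * g * delta
    return phi + dt * dphi
```

In practice, ϕ would be initialized from the CNN output, e.g. as a step function that is negative inside the predicted region and positive outside, and drlse_step would be iterated until the zero level contour stabilizes.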
Our method has been validated on the LiTS and FLARE21 datasets. LiTS provides a publicly available liver dataset; 110 subsets are used for training and 20 for testing. FLARE21 provides both liver and kidney data; 360 subsets are used for training and 50 for testing. The parameters of the DRLSE model are set as follows: α = 1, λ = 1, β = 1 and ε = 1.5. The experiments were run on a Windows 10 server with an Intel i7-11800H CPU (2.3 GHz, 16 GB of memory) and an NVIDIA GeForce 3060 GPU.
Figure 4 shows a liver segmentation result and a kidney segmentation result obtained by the proposed method. Figure 4(a), (c) are the segmentation results of our method, and Figure 4(b), (d) are the corresponding manual labels. As can be seen, the results of the proposed model closely match the ground truth.
The performance of CNN+DRLSE (red contours in Figure 5) is compared with that of the CNN alone (green contours in Figure 5) on the same dataset, using several randomly selected slices. The CNN model produces poor segmentations in the depressed area of the liver (Figure 5, first row) in all three views (axial, sagittal and coronal), and it also produces many noise points in the kidney region (Figure 5, second row). The CNN+DRLSE method yields more accurate contours of both the liver and the kidney than the CNN alone.
To evaluate the segmentation performance, we chose four metrics to compare the different segmentation results with the ground truth, namely the Dice coefficient (DC), positive predictive value (PPV), Jaccard index (JI) and mean surface distance (MSD). A smaller MSD indicates a better segmentation, whereas larger values of the other three metrics indicate better results.
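A straightforward NumPy/SciPy implementation of the four metrics is sketched below. The MSD is computed here as the symmetric mean surface distance, since the paper does not state its exact formulation, and the helper names are illustrative; pred and gt are boolean masks.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt


def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient (DC) between two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())


def jaccard(pred: np.ndarray, gt: np.ndarray) -> float:
    """Jaccard index (JI)."""
    inter = np.logical_and(pred, gt).sum()
    return inter / np.logical_or(pred, gt).sum()


def ppv(pred: np.ndarray, gt: np.ndarray) -> float:
    """Positive predictive value (PPV): TP / (TP + FP)."""
    return np.logical_and(pred, gt).sum() / pred.sum()


def _surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels of a boolean mask."""
    return np.logical_xor(mask, binary_erosion(mask))


def mean_surface_distance(pred: np.ndarray, gt: np.ndarray, spacing=None) -> float:
    """Symmetric mean surface distance (MSD); units follow `spacing` if given."""
    sp, sg = _surface(pred), _surface(gt)
    d_to_gt = distance_transform_edt(~sg, sampling=spacing)    # distance to the GT surface
    d_to_pred = distance_transform_edt(~sp, sampling=spacing)  # distance to the predicted surface
    return 0.5 * (d_to_gt[sp].mean() + d_to_pred[sg].mean())
```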
We compare the proposed method with four other level set methods: the CV model [30], the DRLSE model [29], the LINC model [31] and the LBF model [32]. To validate the capability of DRLSE, we first applied it to the liver images. As shown in Figure 6, the median Dice value reaches 0.963 for DRLSE, followed by 0.877 for LINC, 0.832 for LBF and 0.782 for CV. We then applied the DRLSE model to the kidney images; the median Dice values are 0.964 for DRLSE, followed by 0.883 for LINC, 0.848 for LBF and 0.822 for CV.
Table 1 presents the quantitative results of our method. First, we validate our method on the LiTS dataset: the average DC values are 95.1 and 95.7%, the JI values are 89.4 and 88.9%, the PPV values are 94.6 and 93.7%, and the MSD values are 9.13 and 10.21 for the liver and the kidney, respectively. Then, we validate our method on the FLARE21 dataset: the average DC values are 96.2 and 96.6%, the JI values are 90.5 and 91.8%, the PPV values are 95.5 and 96.1%, and the MSD values are 8.33 and 8.02 for the liver and the kidney, respectively. Overall, the proposed method segments the liver and the kidney accurately, which is useful for clinical applications.
| Type | LiTS DC | LiTS JI | LiTS PPV | LiTS MSD | FLARE21 DC | FLARE21 JI | FLARE21 PPV | FLARE21 MSD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Liver | 0.951 | 0.894 | 0.946 | 9.13 | 0.962 | 0.905 | 0.955 | 8.33 |
| Kidney | 0.957 | 0.889 | 0.937 | 10.21 | 0.966 | 0.918 | 0.961 | 8.02 |
As shown in Table 2, different networks are evaluated on liver images from both the LiTS and FLARE21 datasets. A lower MSD indicates a better match between a segmentation result and the ground truth. The comparison of these metrics shows that Attention U-net+DRLSE gives the most robust performance and matches the ground truth better than U-net, U-net++, Attention U-net and Attention U-net++.
| Methods | LiTS DC | LiTS MSD | LiTS JI | LiTS PPV | FLARE21 DC | FLARE21 MSD | FLARE21 JI | FLARE21 PPV |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| U-net | 0.931 | 12.21 | 0.82 | 0.931 | 0.928 | 11.94 | 0.85 | 0.939 |
| U-net++ | 0.937 | 11.49 | 0.831 | 0.935 | 0.941 | 10.88 | 0.852 | 0.941 |
| Attention U-net++ | 0.942 | 10.54 | 0.895 | 0.948 | 0.949 | 10.22 | 0.902 | 0.95 |
| Attention U-net | 0.94 | 10.98 | 0.881 | 0.942 | 0.944 | 10.16 | 0.892 | 0.944 |
| Attention U-net+DRLSE | 0.957 | 9.11 | 0.921 | 0.952 | 0.964 | 8.35 | 0.9 | 0.953 |
In this paper, we proposed a combined CNN and level set framework for abdominal segmentation. Image cropping is first used to restrict the input to the target region. A CNN is then constructed to generate an initial label for the abdominal region. Finally, level set evolution is applied to refine the segmentation. The experimental results show that, compared with other CNN-based networks, the proposed method produces more accurate segmentations. The level set model performs well on 2D images; in the future, we will extend the proposed method to 3D images and integrate it into ITK- and VTK-based software to help doctors diagnose related abdominal diseases.
This work was supported in part by the National Natural Science Foundation of China (No. 61971118), the Natural Science Foundation of Liaoning (No. 2020-MS-239), the Foundation of Liaoning Education Department (No. LJKZ0210, No. JYT19040), the Aviation Science Foundation (2019ZE054009) and an outsourcing project of Northeastern University (220120099). Guodong Zhang and Wei Guo contributed equally to this work and should be regarded as joint corresponding authors.
The authors declare there is no conflict of interest.
[1] | O. Abd-Elaziz, M. Sayed, M. Abdullah, Liver tumors segmentation from abdominal CT images using region growing and morphological processing, in International Conference on Engineering and Technology (ICET), 2015. https://doi.org/10.1109/ICEngTechnol.2014.7016813 |
[2] | S. Rafiei, N. Karimi, B. Mirmahboub, K. Najarian, S. Soroushmehr, Liver segmentation in abdominal CT images using probabilistic atlas and adaptive 3D region growing, in 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2019. https://doi.org/10.1109/EMBC.2019.8857835 |
[3] | S. Tran, C. Cheng, D. Liu, A multiple layer U-Net, Un-Net, for liver and liver tumor segmentation in CT, IEEE Access, 9 (2020), 3752–3764. https://doi.org/10.1109/ACCESS.2020.3047861 |
[4] | P. Sofia, A. Juan, S. Manuel, A. Roberto, M. Alicia, D. Maceira, Automatic multi-atlas liver segmentation and couinaud classification from CT volumes, Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., 2021 (2021), 2826–2829. https://doi.org/10.1109/EMBC46164.2021.9630668 |
[5] | L. Song, H. Wang, Z. Wang, Bridging the gap between 2D and 3D contexts in CT volume for liver and tumor segmentation, IEEE J. Biomed. Health Inf., 9 (2021), 3450–3459. https://doi.org/10.1109/JBHI.2021.3075752 |
[6] | J. Zhang, B. Ji, Z. Jiang, J. Qin, CR-UNet: Context-rich UNet for liver segmentation from CT volumes, in 2021 International Conference on Electronic Information Engineering and Computer Science (EIECS), 2021. https://doi.org/10.1109/EIECS53707.2021.9588086 |
[7] | T. Lei, R. Wang, Y. Zhang, A. Nandi, DefED-Net: Deformable encoder-decoder network for liver and liver tumor segmentation, IEEE Trans. Radiat. Plasma Med. Sci., 6 (2022), 68–78. https://doi.org/10.1109/TRPMS.2021.3059780 |
[8] | X. Fang, S. Xu, B. Wood, P. Yan, Deep learning-based liver segmentation for fusion-guided intervention, Int. J. Comput. Assisted Radiol. Surg., 15 (2020), 963–972. https://doi.org/10.1007/s11548-020-02147-6 |
[9] | T. Amina, L. Lakhdar, B. Hakim, M. Abdallah, Improved active contour model through automatic initialisation: Liver segmentation, in 2021 IEEE 1st International Maghreb Meeting of the Conference on Sciences and Techniques of Automatic Control and Computer Engineering MI-STA, 2021. https://doi.org/10.1109/MI-STA52233.2021.9464516 |
[10] | M. N. U. Haq, A. Irtaza, N. Nida, M. A. Shah, L. Zubair, Liver tumor segmentation using resnet based mask-R-CNN, in 2021 International Bhurban Conference on Applied Sciences and Technologies (IBCAST), 2021. https://doi.org/10.1109/IBCAST51254.2021.9393194 |
[11] | X. Wang, Y. Zheng, G. Lan, W. Xuan, X. Sang, X. Kong, et al., Liver segmentation from CT images using a sparse priori statistical shape model (SP-SSM), PLoS One, 12 (2017), 1–23. https://doi.org/10.1371/journal.pone.0185249 |
[12] | W. Qin, J. Wu, F. Han, Y. Yuan, W. Zhao, B. Ibragimov, et al., Superpixel-based and boundary-sensitive convolutional neural network for automated liver segmentation, Phys. Med. Biol., 9 (2018), 1–19. https://doi.org/10.1088/1361-6560/aabd19 |
[13] | Y. Yao, Y. Sang, Z. Zhao, Y. Cao, Research on segmentation and recognition of liver CT image based on multi-scale feature fusion, in 2021 2nd International Symposium on Computer Engineering and Intelligent Communications (ISCEIC), 2021. https://doi.org/10.1109/ISCEIC53685.2021.00075 |
[14] | S. Shao, X. Zhang, R. Cheng, C. Deng, Semantic segmentation method of 3D liver image based on contextual attention model, in 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2021. https://doi.org/10.1109/SMC52423.2021.9659018 |
[15] | C. Li, Y. Tan, W. Chen, X. Luo, Y. Gao, X. Jia, et al., Attention Unet++: A nested attention-aware U-Net for liver CT image segmentation, in 2020 IEEE International Conference on Image Processing (ICIP), 2020. https://doi.org/10.1109/ICIP40778.2020.9190761 |
[16] | X. Yan, K. Yuan, W. Zhao, S. Wang, Z. Li, S. Cui, An efficient hybrid model for kidney tumor segmentation in CT images, in 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), 2020. https://doi.org/10.1109/ISBI45749.2020.9098325 |
[17] | N. Thein, A. Nugroho, T. Bharata, K. Hamamoto, An image preprocessing method for kidney stone segmentation in CT scan images, in 2018 International Conference on Computer Engineering, Network and Intelligent Multimedia (CENIM), 2018. https://doi.org/10.1109/CENIM.2018.8710933 |
[18] | G. Yang, G. Li, T. Pan, Y. Kong, X. Zhu, Automatic segmentation of kidney and renal tumor in CT images based on 3D fully convolutional neural network with pyramid pooling module, in 2018 24th International Conference on Pattern Recognition (ICPR), 2018. https://doi.org/10.1109/ICPR.2018.8545143 |
[19] | M. Arafat, G. Hamarne, R. Garbi, Cascaded regression neural nets for kidney localization and segmentation-free volume estimation, IEEE Trans. Med. Imaging, 40 (2021), 1555–1567. https://doi.org/10.1109/TMI.2021.3060465 |
[20] | J. Chen, X. Zhang, J. Wang, Coarse-to-fine deformable model-based kidney 3D segmentation, in 2019 WRC Symposium on Advanced Robotics and Automation (WRC SARA), 2019. https://doi.org/10.1109/WRC-SARA.2019.8931969 |
[21] | S. Yin, Z. Zhang, H. Li, Q. Peng, X. You, S. L. Furth, et al., Fully-automatic segmentation of kidneys in clinical ultrasound images using a boundary distance regression network, in 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), 2019. https://doi.org/10.1109/ISBI.2019.8759170 |
[22] | R. Statkevych, S. Stirenko, Y. Gordienko, Human kidney tissue image segmentation by U-Net models, in IEEE EUROCON 2021-19th International Conference on Smart Technologies, 2021. https://doi.org/10.1109/EUROCON52738.2021.9535599 |
[23] | M. Li, Y. Chen, X. Zheng, K. Liu, Kidney region of interest extraction based on iterative convolution threshold method, in 2021 6th International Conference on Communication, Image and Signal Processing (CCISP), 2021. https://doi.org/10.1109/CCISP52774.2021.9639090 |
[24] | T. Les, T. Markiewcz, M. Dziekiewicz, M. Lorent, Kidney segmentation from computed tomography images using U-Net and batch-based synthesis, Comput. Biol. Med., 123 (2020), 103906. https://doi.org/10.1016/j.compbiomed.2020.103906 |
[25] | H. R. Torres, S. Queirós, P. Morais, B. Oliveira, J. Vilaca, Kidney segmentation in 3-D ultrasound images using a fast phase-based approach, IEEE Trans. Ultrason., Ferroelectr., Freq. Control, 68 (2021), 1521–1531. https://doi.org/10.1109/TUFFC.2020.3039334 |
[26] | N. Weerasinghe, N. Lovell, A. Welsh, G. Stevenson, Multi-parametric fusion of 3D power doppler ultrasound for fetal kidney segmentation using fully convolutional neural networks, IEEE J. Biomed. Health Inf., 25 (2021), 2050–2057. https://doi.org/10.1109/JBHI.2020.3027318 |
[27] | J. Guo, W. Zeng, S. Yu, J. Xiao, RAU-Net: U-Net model based on residual and attention for kidney and kidney tumor segmentation, in 2021 IEEE International Conference on Consumer Electronics and Computer Engineering (ICCECE), 2021. https://doi.org/10.1109/ICCECE51280.2021.9342530 |
[28] | T. M. Geethanjali, Minavathi, M. S. Dinesh, Semantic segmentation of tumors in kidneys using attention U-Net models, in 2021 5th International Conference on Electrical, Electronics, Communication, Computer Technologies and Optimization Techniques (ICEECCOT), 2021. https://doi.org/10.1109/ICEECCOT52851.2021.9708025 |
[29] | C. Li, C. Xu, C. Gui, M. Fox, Distance regularized level set evolution and its application to image segmentation, IEEE Trans. Image Process., 19 (2010), 3243–3254. https://doi.org/10.1109/TIP.2010.2069690 |
[30] | T. Chan, L. Vese, Active contours without edges, IEEE Trans. Image Process., 10 (2001), 266–277. https://doi.org/10.1109/83.902291 |
[31] | C. Feng, D. Zhao, M. Huang, Image segmentation and bias correction using local inhomogeneous iNtensity clustering (LINC): A region-based level set method, Neurocomputing, 219 (2017), 107–129. https://doi.org/10.1016/j.neucom.2016.09.008 |
[32] | C. Li, C. Kao, J. Gore, Z. Ding, Minimization of region scalable fitting energy for image segmentation, IEEE Trans. Image Process., 17 (2008), 1940–1949. https://doi.org/10.1109/TIP.2008.2002304 |