Research article

TS-GCN: A novel tumor segmentation method integrating transformer and GCN

  • Received: 15 May 2023 Revised: 29 May 2023 Accepted: 30 May 2023 Published: 21 September 2023
  • As one of the critical branches of medical image processing, the segmentation of breast cancer tumors is of great importance for planning surgical interventions, radiotherapy and chemotherapy. Breast tumor segmentation faces several challenges: the inherent complexity and heterogeneity of breast tissue, imaging artifacts and noise in medical images, low contrast between the tumor region and healthy tissue, and inconsistent tumor size. Furthermore, existing segmentation methods may not fully capture the rich spatial and contextual information of small regions in breast images, leading to suboptimal performance. In this paper, we propose a novel breast tumor segmentation method for medical image analysis, the transformer and graph convolutional network (TS-GCN). Specifically, we designed a feature aggregation network to fuse the features extracted by transformer, graph convolutional network (GCN) and convolutional neural network (CNN) branches. The CNN branch extracts local deep features of the image, while the transformer and GCN branches better capture the spatial and contextual dependencies among pixels. By leveraging the strengths of the three feature extraction networks, our method achieved superior segmentation performance on the BUSI dataset and dataset B. TS-GCN attained the best scores on several indexes, with an accuracy of 0.9373, Dice of 0.9058, IoU of 0.7634, F1 score of 0.9338 and AUC of 0.9692, outperforming other state-of-the-art methods. This segmentation method holds promise for medical image analysis and the diagnosis of other diseases.
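The abstract describes fusing per-pixel features from three branches (CNN, transformer, GCN) in a feature aggregation network. The paper's exact fusion module is not specified here; the following is only a minimal NumPy sketch of one common aggregation scheme (channel-wise concatenation followed by a linear projection, the analogue of a 1x1 convolution). The shapes, weight matrix `w` and function name `fuse_features` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_features(f_cnn, f_trans, f_gcn, w):
    """Concatenate per-pixel features from three branches along the
    channel axis, then project back to C channels with a linear map
    (a stand-in for a 1x1 convolution). All inputs are (H, W, C)."""
    stacked = np.concatenate([f_cnn, f_trans, f_gcn], axis=-1)  # (H, W, 3C)
    return stacked @ w                                          # (H, W, C)

# Toy example with random features (real branches would produce these).
H, W, C = 8, 8, 16
rng = np.random.default_rng(0)
f_cnn, f_trans, f_gcn = (rng.standard_normal((H, W, C)) for _ in range(3))
w = rng.standard_normal((3 * C, C)) / np.sqrt(3 * C)  # projection weights
fused = fuse_features(f_cnn, f_trans, f_gcn, w)
print(fused.shape)  # → (8, 8, 16)
```

In a trained network the projection would be learned end-to-end; this sketch only shows the data flow of concatenation-based aggregation.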

    Citation: Haiyan Song, Cuihong Liu, Shengnan Li, Peixiao Zhang. TS-GCN: A novel tumor segmentation method integrating transformer and GCN[J]. Mathematical Biosciences and Engineering, 2023, 20(10): 18173-18190. doi: 10.3934/mbe.2023807
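The Dice and IoU figures reported in the abstract are standard overlap metrics for binary segmentation masks. As a reference for how such numbers are computed (this is the textbook definition, not code from the paper), a small NumPy sketch:

```python
import numpy as np

def dice_iou(pred, gt, eps=1e-7):
    """Dice = 2|P∩G| / (|P|+|G|); IoU = |P∩G| / |P∪G|.
    `eps` guards against division by zero on empty masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = (2 * inter + eps) / (pred.sum() + gt.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, gt).sum() + eps)
    return dice, iou

# Toy 2x3 prediction vs. ground truth: intersection 2, union 4.
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
d, i = dice_iou(pred, gt)
print(round(d, 3), round(i, 3))  # → 0.667 0.5
```

Note that Dice ≥ IoU always holds (Dice = 2·IoU / (1 + IoU)), consistent with the reported 0.9058 Dice vs. 0.7634 IoU.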






  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
