Research article

FM-Unet: Biomedical image segmentation based on feedback mechanism Unet


  • Received: 08 March 2023; Revised: 27 April 2023; Accepted: 05 May 2023; Published: 15 May 2023
  • With the development of deep learning, medical image segmentation has made significant progress in the field of computer vision. Unet is a pioneering work, and many researchers have built further architectures on it. However, we found that most of these variants improve the backward propagation and feature integration of the network, while few modify its forward propagation and information fusion. Therefore, we propose a feedback mechanism Unet (FM-Unet), which adds feedback paths to the encoder and decoder paths of the network so that the current encoder and decoder stages can fuse information from the next stage. This alleviates both information loss in the encoder and information shortage in the decoder. The proposed model has a moderate number of parameters, and its simultaneous multi-node information fusion helps mitigate gradient vanishing. Experiments on two public datasets show that FM-Unet achieves satisfactory results.
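The feedback idea described above, fusing the next stage's (downsampled) features back into the current encoder stage, can be illustrated with a minimal, framework-free sketch. The 2x2 average pooling, nearest-neighbour upsampling, and additive fusion below are illustrative assumptions for a single-channel feature map, not the paper's actual operators:

```python
def avg_pool2(x):
    # 2x2 average pooling over a 2-D list: the plain encoder downsampling step
    return [[(x[i][j] + x[i][j + 1] + x[i + 1][j] + x[i + 1][j + 1]) / 4.0
             for j in range(0, len(x[0]), 2)]
            for i in range(0, len(x), 2)]

def upsample2(x):
    # Nearest-neighbour 2x upsampling: brings next-stage features back
    # to the current stage's resolution (the feedback path)
    out = []
    for row in x:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def encoder_stage_with_feedback(x):
    # Compute the next (downsampled) stage, then fuse it back into the
    # current stage by element-wise addition before passing features on.
    f_next = avg_pool2(x)
    up = upsample2(f_next)
    f_fused = [[a + b for a, b in zip(r1, r2)]
               for r1, r2 in zip(x, up)]
    return f_fused, f_next

# Toy 4x4 feature map: fused keeps the 4x4 resolution, f_next is 2x2
x = [[float(4 * i + j) for j in range(4)] for i in range(4)]
fused, nxt = encoder_stage_with_feedback(x)
```

In a real network the pooling and upsampling would be learned (strided convolutions and transposed convolutions), and the fusion could be concatenation followed by a convolution rather than addition; the sketch only shows where the extra feedback path sits relative to a plain encoder stage.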

    Citation: Lei Yuan, Jianhua Song, Yazhuo Fan. FM-Unet: Biomedical image segmentation based on feedback mechanism Unet[J]. Mathematical Biosciences and Engineering, 2023, 20(7): 12039-12055. doi: 10.3934/mbe.2023535






  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)


Figures and Tables

Figures(5)  /  Tables(2)
