Research article

MAU-Net: Mixed attention U-Net for MRI brain tumor segmentation

  • Received: 25 September 2023 · Revised: 02 November 2023 · Accepted: 06 November 2023 · Published: 13 November 2023
  • Computer-aided brain tumor segmentation using magnetic resonance imaging (MRI) is of great significance for the clinical diagnosis and treatment of patients. Recently, U-Net has received widespread attention as a milestone in automatic brain tumor segmentation. Building on its merits and motivated by the success of attention mechanisms, this work proposes a novel mixed attention U-Net model, MAU-Net, which integrates spatial-channel attention and self-attention into a single U-Net architecture for MRI brain tumor segmentation. Specifically, MAU-Net embeds a Shuffle Attention module, a form of spatial-channel attention, after each convolutional block in the encoder stage to enhance local details of brain tumor images. Meanwhile, given the superior capability of self-attention in modeling long-distance dependencies, an enhanced Transformer module is introduced at the bottleneck to improve the interactive learning of global information in brain tumor images. MAU-Net achieves Dice values of 77.88/77.47% for enhancing tumor, 90.15/90.00% for whole tumor and 81.09/81.63% for tumor core on the brain tumor segmentation (BraTS) 2019/2020 validation datasets, outperforming the baseline by 1.15 and 0.93% on average, respectively. In addition, MAU-Net is competitive with representative methods.
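The Shuffle Attention step described above (a spatial-channel attention inserted after each encoder block) can be illustrated with a minimal sketch. This is not the authors' implementation: it follows the general SA-Net recipe (group the channels, run a channel-attention branch and a spatial-attention branch on each half-group, then channel-shuffle), but omits the learnable gating weights and group normalization parameters of the original module, replacing them with fixed identity-style gates for clarity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def shuffle_attention(x, groups=4):
    """Simplified 2D Shuffle Attention sketch (SA-Net style).

    x: feature map of shape (C, H, W); C must be divisible by 2 * groups.
    Each channel group is split into a channel-attention half (gated by its
    global average pool) and a spatial-attention half (gated by a per-channel
    normalized map); the result is channel-shuffled to mix the groups.
    """
    C, H, W = x.shape
    assert C % (2 * groups) == 0
    out = np.empty_like(x)
    gc = C // groups  # channels per group
    for g in range(groups):
        grp = x[g * gc:(g + 1) * gc]
        x_ch, x_sp = grp[: gc // 2], grp[gc // 2:]
        # channel branch: global average pooling -> sigmoid gate per channel
        s = x_ch.mean(axis=(1, 2), keepdims=True)
        x_ch = x_ch * sigmoid(s)
        # spatial branch: normalize each channel, gate each spatial position
        mu = x_sp.mean(axis=(1, 2), keepdims=True)
        var = x_sp.var(axis=(1, 2), keepdims=True)
        x_sp = x_sp * sigmoid((x_sp - mu) / np.sqrt(var + 1e-5))
        out[g * gc:(g + 1) * gc] = np.concatenate([x_ch, x_sp], axis=0)
    # channel shuffle: interleave the two halves across groups
    return out.reshape(2, C // 2, H, W).transpose(1, 0, 2, 3).reshape(C, H, W)

feat = np.random.rand(16, 8, 8).astype(np.float32)
att = shuffle_attention(feat, groups=4)
print(att.shape)  # (16, 8, 8)
```

In the full model, a block like this would sit after each encoder convolution, with the bottleneck Transformer handling the complementary long-range dependencies.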

    Citation: Yuqing Zhang, Yutong Han, Jianxin Zhang. MAU-Net: Mixed attention U-Net for MRI brain tumor segmentation[J]. Mathematical Biosciences and Engineering, 2023, 20(12): 20510-20527. doi: 10.3934/mbe.2023907


  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
