Super-resolution (SR) of magnetic resonance imaging (MRI) is gaining increasing attention for its ability to provide detailed anatomical information. However, current SR methods often rely on complex convolutional networks for feature extraction, which are difficult to train and ill-suited to the limited computational resources of medical scenarios. To address these bottlenecks, we propose a multi-distillation residual network (MDRN) for more discriminative feature refinement, which achieves a superior trade-off between reconstruction accuracy and computational cost. Specifically, a novel feature multi-distillation residual block with a contrast-aware channel attention module is designed to focus the residual features on low-level visual information, maximizing the power of MDRN. Comprehensive experiments demonstrate the superiority of MDRN over state-of-the-art methods in both reconstruction quality and efficiency: it outperforms existing methods in peak signal-to-noise ratio by up to 0.44–1.82 dB at 4× scale, while requiring less GPU memory and runtime than other SR methods. The source code will be available at https://github.com/Jennieyy/MDRN.
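The contrast-aware channel attention mentioned above replaces the usual global average pooling with a per-channel "contrast" statistic (mean plus standard deviation), so the gating is driven by local structural variation rather than average intensity alone. The following is a minimal NumPy sketch of that idea, not the authors' implementation: the bottleneck weights `w1`, `b1`, `w2`, `b2` and the reduction ratio `r` are hypothetical placeholders standing in for the two 1×1 convolutions typically used in such modules.

```python
import numpy as np

def cca_gates(feat, w1, b1, w2, b2):
    """Contrast-aware channel attention gates (illustrative sketch).

    feat: (C, H, W) feature map.
    Each channel is summarized by its contrast (mean + std) instead of
    a plain global average, then squeezed through a small bottleneck
    MLP (hypothetical weights) and a sigmoid to produce gates in (0, 1).
    """
    c = feat.reshape(feat.shape[0], -1)
    contrast = c.mean(axis=1) + c.std(axis=1)            # (C,)
    hidden = np.maximum(0.0, w1 @ contrast + b1)         # ReLU bottleneck
    return 1.0 / (1.0 + np.exp(-(w2 @ hidden + b2)))     # sigmoid gates

def apply_cca(feat, gates):
    """Rescale each channel of the feature map by its attention gate."""
    return feat * gates[:, None, None]

# Toy usage with random weights, purely for illustration.
rng = np.random.default_rng(0)
C, r = 8, 4                                   # channels, reduction ratio
feat = rng.standard_normal((C, 16, 16))
w1, b1 = 0.1 * rng.standard_normal((C // r, C)), np.zeros(C // r)
w2, b2 = 0.1 * rng.standard_normal((C, C // r)), np.zeros(C)
out = apply_cca(feat, cca_gates(feat, w1, b1, w2, b2))
```

Because the gates are bounded in (0, 1) and applied multiplicatively, the module only reweights channels and preserves the feature-map shape, which is what lets it sit inside a residual block without disturbing the skip connection.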
Citation: Liwei Deng, Jingyi Chen, Xin Yang, Sijuan Huang. MDRN: Multi-distillation residual network for efficient MR image super-resolution[J]. Mathematical Biosciences and Engineering, 2024, 21(10): 7421-7434. doi: 10.3934/mbe.2024326