To generate high-quality single-photon emission computed tomography (SPECT) images under a low-dose acquisition mode, a sinogram denoising method is studied for suppressing random oscillations and enhancing contrast in the projection domain. A conditional generative adversarial network with cross-domain regularization (CGAN-CDR) is proposed for low-dose SPECT sinogram restoration. The generator extracts multiscale sinusoidal features from a low-dose sinogram in a stepwise manner and rebuilds them into a restored sinogram. Long skip connections are introduced into the generator so that low-level features can be better shared and reused, and the spatial and angular sinogram information can be better recovered. A patch discriminator is employed to capture detailed sinusoidal features within sinogram patches, so that detailed features in local receptive fields can be effectively characterized. Meanwhile, a cross-domain regularization is developed in both the projection and image domains. Projection-domain regularization directly constrains the generator by penalizing the difference between generated and label sinograms. Image-domain regularization imposes a similarity constraint on the reconstructed images, which ameliorates the ill-posedness of the problem and serves as an indirect constraint on the generator. Through adversarial learning, the CGAN-CDR model achieves high-quality sinogram restoration. Finally, the preconditioned alternating projection algorithm with total variation regularization is adopted for image reconstruction. Extensive numerical experiments show that the proposed model performs well in low-dose sinogram restoration. In visual analysis, CGAN-CDR performs well in terms of noise and artifact suppression, contrast enhancement and structure preservation, particularly in low-contrast regions. In quantitative analysis, CGAN-CDR obtains superior results in both global and local image quality metrics. In robustness analysis, CGAN-CDR better recovers detailed bone structure in the reconstructed image from a higher-noise sinogram. This work demonstrates the feasibility and effectiveness of CGAN-CDR for low-dose SPECT sinogram restoration. CGAN-CDR yields significant quality improvement in both the projection and image domains, which indicates the potential of the proposed method for real low-dose studies.
Citation: Si Li, Limei Peng, Fenghuan Li, Zengguo Liang. Low-dose sinogram restoration enabled by conditional GAN with cross-domain regularization in SPECT imaging, Mathematical Biosciences and Engineering, 2023, 20(6): 9728–9758. https://doi.org/10.3934/mbe.2023427
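The loss design summarized in the abstract, an adversarial term from a conditional patch discriminator combined with projection-domain and image-domain regularization, can be sketched as follows. This is a minimal PyTorch illustration under stated assumptions: the generator `G`, patch discriminator `D` and reconstruction operator `reconstruct` are placeholders, and the L1 penalties and the weights `lambda_proj` and `lambda_img` are illustrative choices, not the authors' implementation.

```python
# Sketch of a cross-domain regularized conditional GAN objective for
# low-dose sinogram restoration. All module names and weights below are
# illustrative assumptions, not the CGAN-CDR implementation itself.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()   # adversarial loss on patch logits
l1 = nn.L1Loss()               # assumed penalty in both domains

lambda_proj = 100.0   # weight of projection-domain (sinogram) term -- assumed
lambda_img = 10.0     # weight of image-domain similarity term -- assumed


def generator_loss(D, G, low_dose_sino, label_sino, reconstruct):
    """Adversarial loss plus cross-domain regularization for the generator.

    `reconstruct` maps a sinogram to a reconstructed image; for the gradient
    to reach the generator it must be differentiable, so in practice a
    differentiable surrogate of the reconstruction step would be used here.
    """
    fake_sino = G(low_dose_sino)

    # Conditional patch discriminator: one logit per local sinogram patch.
    pred_fake = D(torch.cat([low_dose_sino, fake_sino], dim=1))
    adv_loss = bce(pred_fake, torch.ones_like(pred_fake))

    # Projection-domain regularization: penalize sinogram mismatch directly.
    proj_loss = l1(fake_sino, label_sino)

    # Image-domain regularization: similarity of the reconstructed images,
    # an indirect constraint on the generator.
    img_loss = l1(reconstruct(fake_sino), reconstruct(label_sino))

    return adv_loss + lambda_proj * proj_loss + lambda_img * img_loss


def discriminator_loss(D, G, low_dose_sino, label_sino):
    """Standard conditional GAN discriminator loss on sinogram patches."""
    with torch.no_grad():
        fake_sino = G(low_dose_sino)
    pred_real = D(torch.cat([low_dose_sino, label_sino], dim=1))
    pred_fake = D(torch.cat([low_dose_sino, fake_sino], dim=1))
    return 0.5 * (bce(pred_real, torch.ones_like(pred_real)) +
                  bce(pred_fake, torch.zeros_like(pred_fake)))
```

In this sketch the image-domain term acts on the generator only through the reconstruction operator applied to the generated sinogram, mirroring the indirect constraint described in the abstract; the final reconstruction reported in the paper is still performed separately by the preconditioned alternating projection algorithm with total variation regularization.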