Image reconstruction is central to computed tomography (CT) imaging, so continued improvement of reconstruction methods is important. Unrolling (unfolding) methods combine a deep learning model with a traditional iterative algorithm: they are interpretable and reconstruct quickly, but in essence they replace the approximation operator in the optimization objective with a learned operator in the form of a convolutional neural network. In this paper, we design a new iterator network (iNet), based on the universal approximation theorem, that learns the functional mapping from the current iterate to the next in the maximum-likelihood expectation maximization (MLEM) algorithm. To evaluate the effectiveness of the method, we conduct experiments on a CT dataset; the results show that our iNet method improves the quality of reconstructed images.
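For context on the iteration that iNet is trained to imitate, the following is a minimal NumPy sketch of the classical MLEM update, which maps the current image estimate to the next one; all names (`A`, `y`, `mlem`, `n_iters`) are illustrative assumptions, not the paper's code.

```python
import numpy as np

def mlem(A, y, n_iters=500, eps=1e-12):
    """Classical MLEM iterations for a linear forward model y ≈ A x.

    A : (n_rays, n_pixels) system matrix
    y : (n_rays,) measured sinogram (nonnegative)
    Returns the image estimate after n_iters multiplicative updates.
    """
    x = np.ones(A.shape[1])               # uniform positive initialization
    sens = A.T @ np.ones(A.shape[0])      # sensitivity image A^T 1
    for _ in range(n_iters):
        proj = A @ x                      # forward projection of current estimate
        ratio = y / np.maximum(proj, eps) # measured / estimated sinogram
        x = x * (A.T @ ratio) / np.maximum(sens, eps)  # multiplicative update
    return x

# Tiny synthetic example: 6 rays, 4 pixels, noiseless consistent data.
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(6, 4))
x_true = np.array([1.0, 2.0, 0.5, 1.5])
y = A @ x_true
x_hat = mlem(A, y)
```

In an unrolled or iterator-learning approach, the mapping from `x` to the updated `x` inside the loop is what a network such as iNet would approximate.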
Citation: Limin Ma, Yudong Yao, Yueyang Teng. Iterator-Net: sinogram-based CT image reconstruction[J]. Mathematical Biosciences and Engineering, 2022, 19(12): 13050-13061. doi: 10.3934/mbe.2022609