Research article

dm-GAN: Distributed multi-latent code inversion enhanced GAN for fast and accurate breast X-ray image automatic generation


  • Received: 04 July 2023 Revised: 18 September 2023 Accepted: 09 October 2023 Published: 23 October 2023
  • Breast cancer seriously threatens women's physical and mental health. Mammography is one of the most effective methods for breast cancer diagnosis, and artificial intelligence algorithms are increasingly used to identify diverse breast masses in mammograms. These popular intelligent diagnosis methods require a large number of breast images for training; however, collecting and labeling many breast images manually is extremely time consuming and inefficient. In this paper, we propose a distributed multi-latent code inversion enhanced Generative Adversarial Network (dm-GAN) for fast, accurate and automatic breast image generation. The proposed dm-GAN takes advantage of the generator and discriminator of the GAN framework to achieve automatic image generation. The new generator in dm-GAN adopts a multi-latent code inverse mapping method to simplify the data fitting process of GAN generation and improve the accuracy of image generation, while a multi-discriminator structure is used to enhance the discrimination accuracy. The experimental results show that the proposed dm-GAN automatically generates breast images with higher accuracy, achieving up to 1.84 dB higher Peak Signal-to-Noise Ratio (PSNR) and 5.61% lower Fréchet Inception Distance (FID), as well as 1.38x faster generation than the state-of-the-art.
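As context for the multi-latent code idea: multi-code GAN inversion methods map several latent codes through the early generator layers and blend the resulting feature maps with learned per-code channel weights before the remaining layers produce an image. The abstract does not give dm-GAN's exact formulation, so the sketch below is only an illustrative toy of that composition step; the shapes and names (the feature maps standing in for a frozen generator head, the weights `alpha`) are assumptions, not the authors' code.

```python
import numpy as np

# Toy sketch of multi-latent-code feature composition: N latent codes are
# assumed to have been mapped to intermediate feature maps by a frozen
# generator head; the maps are blended with per-code channel weights.
rng = np.random.default_rng(42)
N, C, H, W = 4, 8, 16, 16                   # number of codes, channels, spatial size
features = rng.normal(size=(N, C, H, W))    # stands in for G_head(z_n) per code z_n
alpha = rng.random(size=(N, C))             # learned importance per code and channel

# Normalize the importance weights across codes, then blend the feature maps.
weights = alpha / alpha.sum(axis=0, keepdims=True)
blended = np.einsum("nc,nchw->chw", weights, features)

print(blended.shape)                        # a single (C, H, W) feature tensor
```

In inversion-based generation, both the latent codes and the blending weights would be optimized so the composed output reconstructs (or resembles) real mammograms; here the blend itself is the only step shown.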
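The reported gains use standard image-quality metrics. PSNR has a fixed definition, 10·log10(MAX² / MSE), so a minimal reference implementation can be sketched; FID additionally requires Inception-v3 feature statistics and is omitted here. The toy images below are illustrative only.

```python
import numpy as np

def psnr(reference, generated, max_val=255.0):
    """Peak Signal-to-Noise Ratio (dB) between a reference and a generated image."""
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(generated, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: an 8-bit "image" and a mildly noisy copy of it.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(img + rng.normal(0.0, 5.0, size=img.shape), 0, 255)

print(round(psnr(img, noisy), 2))
```

Higher PSNR means the generated image is closer to the reference pixel-wise, which is the sense in which the abstract's "up to 1.84 dB higher PSNR" claim should be read.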

    Citation: Jiajia Jiao, Xiao Xiao, Zhiyu Li. dm-GAN: Distributed multi-latent code inversion enhanced GAN for fast and accurate breast X-ray image automatic generation[J]. Mathematical Biosciences and Engineering, 2023, 20(11): 19485-19503. doi: 10.3934/mbe.2023863

  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)

Figures and Tables

Figures(5)  /  Tables(3)
