Research article

Lesion detection of chest X-Ray based on scalable attention residual CNN


  • Received: 03 July 2022; Revised: 17 September 2022; Accepted: 19 October 2022; Published: 04 November 2022
  • Abstract: Most research on disease recognition in chest X-rays is limited to segmentation and classification, and inaccurate recognition of lesion edges and small regions forces doctors to spend more time reaching a judgment. In this paper, we propose a lesion detection method based on a scalable attention residual CNN (SAR-CNN), which uses object detection to identify and localize diseases in chest X-rays and greatly improves work efficiency. We designed a multi-convolution feature fusion block (MFFB), a tree-structured aggregation module (TSAM) and scalable channel and spatial attention (SCSA), which effectively alleviate the difficulties in chest X-ray recognition caused by single-resolution features, weak communication between features of different layers and the lack of attention fusion, respectively. These three modules are embeddable and can easily be combined with other networks. In extensive experiments on VinDr-CXR, the largest public chest radiograph detection dataset, the proposed method improved the mean average precision (mAP) from 12.83% to 15.75% under the PASCAL VOC 2010 standard with IoU > 0.4, exceeding existing mainstream deep learning models. In addition, the proposed model has lower complexity and a faster inference speed, which facilitates the deployment of computer-aided diagnosis systems and provides a reference solution for related communities.
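
To make the module descriptions above more concrete, the following PyTorch sketch shows a generic, embeddable block that applies channel attention followed by spatial attention, in the spirit of the SCSA idea and of the widely used CBAM/ECA designs. It is an illustrative sketch under our own assumptions, not the authors' implementation; the class name, reduction ratio and kernel size are placeholders.

# Minimal sketch (not the paper's code) of an embeddable channel + spatial
# attention block; names and hyperparameters are illustrative only.
import torch
import torch.nn as nn


class ChannelSpatialAttention(nn.Module):
    """Channel attention followed by spatial attention over a CNN feature map."""

    def __init__(self, channels: int, reduction: int = 16, spatial_kernel: int = 7):
        super().__init__()
        # Channel attention: squeeze spatial dimensions, then re-weight channels.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: compress channels to mean/max maps, then convolve.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=spatial_kernel, padding=spatial_kernel // 2),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)                      # re-weight channels
        avg_map = x.mean(dim=1, keepdim=True)             # B x 1 x H x W
        max_map = x.amax(dim=1, keepdim=True)             # B x 1 x H x W
        attn = self.spatial_gate(torch.cat([avg_map, max_map], dim=1))
        return x * attn                                   # re-weight spatial locations


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)                    # dummy backbone feature map
    print(ChannelSpatialAttention(64)(feats).shape)       # torch.Size([2, 64, 32, 32])

Because the block maps a C x H x W feature map back to the same shape, it can be dropped after any residual stage of a backbone, which is the sense in which the abstract calls the modules embeddable.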

    Citation: Cong Lin, Yiquan Huang, Wenling Wang, Siling Feng, Mengxing Huang. Lesion detection of chest X-Ray based on scalable attention residual CNN[J]. Mathematical Biosciences and Engineering, 2023, 20(2): 1730-1749. doi: 10.3934/mbe.2023079
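
The headline numbers in the abstract are mAP values computed under the PASCAL VOC 2010 protocol at an IoU threshold of 0.4. The sketch below spells out the two underlying computations, box IoU and all-point-interpolated average precision, using the standard definitions; the helper names and toy inputs are our own illustration, not the paper's evaluation code.

# Minimal sketch (assumed helpers, not the paper's code) of box IoU and
# VOC 2010-style "all-point" average precision at an IoU threshold of 0.4.
import numpy as np


def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def average_precision(recall, precision):
    """Area under the monotonically decreasing precision-recall envelope."""
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(p) - 2, -1, -1):       # enforce a non-increasing envelope
        p[i] = max(p[i], p[i + 1])
    idx = np.where(r[1:] != r[:-1])[0]        # points where recall steps up
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))


if __name__ == "__main__":
    IOU_THRESHOLD = 0.4                       # a detection counts as correct above this overlap
    print(box_iou((10, 10, 60, 60), (30, 30, 80, 80)) > IOU_THRESHOLD)   # False (IoU ~ 0.22)
    # Cumulative precision/recall of three ranked detections: TP, FP, TP.
    print(average_precision(np.array([0.5, 0.5, 1.0]),
                            np.array([1.0, 0.5, 2 / 3])))                # 0.833...

mAP is then the mean of these per-class AP values over the lesion classes annotated in VinDr-CXR.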

  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)

