Research article

Enhanced spectral attention and adaptive spatial learning guided network for hyperspectral and LiDAR classification

  • Received: 05 March 2024 Revised: 06 June 2024 Accepted: 25 June 2024 Published: 03 July 2024
  • Although the data fusion of hyperspectral images (HSI) and light detection and ranging (LiDAR) has provided significant gains for land-cover classification, it also brings technical obstacles: it is difficult to capture discriminative local and global spatial-spectral features from redundant data and to build interactions between heterogeneous data. In this paper, a classification network named the enhanced spectral attention and adaptive spatial learning guided network (ESASNet) is proposed for the joint use of HSI and LiDAR. Specifically, first, by combining a convolutional neural network (CNN) with a transformer, adaptive spatial learning (ASL) and enhanced spectral learning (ESL) are proposed to learn the spectral-spatial features from the HSI data and the elevation features from the LiDAR data over both local and global receptive fields. Second, considering that HSI has a continuous, narrowband spectrum, ESL adds enhanced local self-attention to strengthen the mining of spectral correlations across adjacent bands. Finally, a feature fusion module is proposed to ensure efficient information exchange between HSI and LiDAR during spectral and spatial feature fusion. Experimental evaluations on HSI-LiDAR datasets clearly illustrate that ESASNet outperforms state-of-the-art methods in feature extraction. The code is available at https://github.com/AirsterMode/ESASNet.
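    The local self-attention idea in ESL — that each narrow spectral band is most correlated with its immediate neighbours, so attention can be restricted to a small window along the band axis — can be illustrated with a minimal sketch. This is a toy NumPy illustration of windowed scaled dot-product attention over spectral bands, not the authors' implementation; the function name, window size, and feature layout are assumptions for illustration.

    ```python
    import numpy as np

    def local_spectral_attention(x, window=5):
        """Toy local self-attention over adjacent spectral bands.

        x: (bands, dim) array, one feature vector per spectral band.
        Each band attends only to bands within +/- window//2 of itself,
        reflecting the strong correlation between neighbouring narrow bands.
        """
        bands, dim = x.shape
        half = window // 2
        out = np.zeros_like(x, dtype=float)
        for b in range(bands):
            lo, hi = max(0, b - half), min(bands, b + half + 1)
            keys = x[lo:hi]                      # neighbouring bands only
            scores = keys @ x[b] / np.sqrt(dim)  # scaled dot-product scores
            w = np.exp(scores - scores.max())
            w /= w.sum()                         # softmax over the local window
            out[b] = w @ keys                    # weighted sum of neighbours
        return out
    ```

    Because the softmax weights sum to one, each output vector is a convex combination of its spectral neighbours; a full transformer layer would add learned query/key/value projections and a residual connection on top of this windowed aggregation.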

    Citation: Bingsheng Li, Na Li, Jianmin Ren, Xupeng Guo, Chao Liu, Hao Wang, Qingwu Li. Enhanced spectral attention and adaptive spatial learning guided network for hyperspectral and LiDAR classification[J]. Electronic Research Archive, 2024, 32(7): 4218-4236. doi: 10.3934/era.2024190

  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)