Research article

A fast matrix completion method based on truncated $ {\mathit{L}}_{2, 1} $ norm minimization

  • Received: 05 January 2024 Revised: 08 February 2024 Accepted: 29 February 2024 Published: 07 March 2024
  • In recent years, the truncated nuclear norm regularization (TNNR) method has attracted considerable attention from researchers in machine learning and image processing, because it recovers matrices with missing data much more accurately than traditional methods based on the nuclear norm. However, TNNR is reported to be very slow, owing to its large number of singular value decomposition (SVD) iterations. In this paper, a truncated $ {\mathit{L}}_{2, 1} $ norm minimization method, abbreviated as TLNM, is presented for fast and accurate matrix completion. In the proposed TLNM method, the truncated nuclear norm minimization model of TNNR is improved to a truncated $ {\mathit{L}}_{2, 1} $ norm minimization model that optimizes the truncated $ {\mathit{L}}_{2, 1} $ norm and a weighted noisy matrix simultaneously to improve accuracy. By using QR decomposition to compute the orthogonal bases used to reconstruct the recovered matrix, the proposed TLNM method is much faster than TNNR. Extensive results on color images validate the effectiveness and efficiency of TLNM compared with TNNR and other competing methods.
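
    The reported speed advantage comes from replacing the repeated SVDs of TNNR with QR decompositions that only produce orthogonal column bases. The following minimal NumPy sketch illustrates that trade-off; it is not the authors' TLNM algorithm, and the matrix sizes, the target rank r, and the random-sketch step are assumptions made purely for demonstration.

        # Minimal sketch (illustrative only): obtaining an orthogonal column basis
        # via economy-size QR instead of a full SVD, then reconstructing by projection.
        import numpy as np

        rng = np.random.default_rng(0)
        m, n, r = 500, 400, 20                              # hypothetical sizes and target rank
        X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # exactly rank-r test matrix

        # SVD route: full decomposition, keep the leading r left singular vectors.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        basis_svd = U[:, :r]

        # QR route: orthonormalize r randomly mixed columns; no singular values are computed.
        Y = X @ rng.standard_normal((n, r))                 # random sketch of the column space
        basis_qr, _ = np.linalg.qr(Y)                       # economy-size QR gives an orthonormal basis

        # Either basis yields a rank-r approximation by orthogonal projection.
        X_rec = basis_qr @ (basis_qr.T @ X)
        print(np.linalg.norm(X - X_rec) / np.linalg.norm(X))  # close to 0 for an exactly rank-r matrix

    For an m-by-r matrix, the economy-size QR factorization costs on the order of m r^2 operations, whereas a full SVD of an m-by-n matrix costs on the order of m n min(m, n); this gap is, roughly, the intuition behind the speed-up attributed to QR-based reconstruction.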

    Citation: Zhengyu Liu, Yufei Bao, Changhai Wang, Xiaoxiao Chen, Qing Liu. A fast matrix completion method based on truncated $ {\mathit{L}}_{2, 1} $ norm minimization[J]. Electronic Research Archive, 2024, 32(3): 2099-2119. doi: 10.3934/era.2024095

  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
