Research article

A hybrid singular value thresholding algorithm with diagonal-modify for low-rank matrix recovery

  • Received: 21 September 2024 Revised: 21 October 2024 Accepted: 24 October 2024 Published: 04 November 2024
  • In this paper, a new hybrid singular value thresholding algorithm with diagonal-modify, based on the augmented Lagrange multiplier (ALM) method, was proposed for low-rank matrix recovery, in which only part of the singular values were treated by a hybrid threshold operator with a diagonal update, which allowed the algorithm to use simple arithmetic operations and to keep the computational cost of each iteration low. The new algorithm decreased the complexity of the singular value decomposition and shortened the computing time. The convergence of the new algorithm was discussed. Finally, numerical experiments showed that the new algorithm greatly improved the solving efficiency of the matrix recovery problem and saved computational cost, and its effect was clearly better than that of the other algorithms considered in the experiments.

    Citation: Ruiping Wen, Liang Zhang, Yalei Pei. A hybrid singular value thresholding algorithm with diagonal-modify for low-rank matrix recovery[J]. Electronic Research Archive, 2024, 32(11): 5926-5942. doi: 10.3934/era.2024274


    The low-rank matrix recovery problem, also known as robust PCA or sparse and low-rank matrix decomposition, seeks the lowest-rank matrix consistent with relatively few linear measurements. It arises in many fields such as collaborative filtering [1,2,3], machine learning [4], image alignment [5,6], signal processing [7], quantum state tomography [8], and more.

    The problem of low-rank matrix recovery was first introduced in [9,10,11,12]. It can be regarded as a matrix analogue of compressed sensing [13] and refers to automatically identifying the corrupted entries and recovering the original matrix when some entries of the matrix are seriously corrupted or missing. Wright et al. [14] proved that the low-rank matrix A can be exactly recovered from the matrix D by solving the following convex optimization problem,

    $$\min_{A,E}\ \|A\|_*+\lambda\|E\|_1,\quad \text{s.t.}\ D=A+E, \tag{1.1}$$

    where $\|A\|_*:=\sum_{k=1}^{r}\sigma_k(A)$ is called the nuclear norm of $A$, $\sigma_k(A)$ denotes the $k$-th largest singular value of $A\in\mathbb{R}^{n_1\times n_2}$ with $\mathrm{rank}(A)=r$, $\|E\|_1$ denotes the sum of the absolute values of the entries of $E$, and $\lambda$ is a positive weighting parameter.
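
    To make the two terms in model (1.1) concrete, the following NumPy sketch (an illustrative addition, not from the original paper) evaluates the nuclear norm, the entrywise $\ell_1$ norm, and the objective of (1.1) for a candidate pair $(A,E)$:

```python
import numpy as np

def nuclear_norm(A):
    # ||A||_* : sum of the singular values of A.
    return np.linalg.svd(A, compute_uv=False).sum()

def l1_norm(E):
    # ||E||_1 : sum of the absolute values of all entries of E.
    return np.abs(E).sum()

def objective(A, E, lam):
    # Objective of model (1.1) for a candidate pair (A, E) and weight lam.
    return nuclear_norm(A) + lam * l1_norm(E)
```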

    As for the solution of model (1.1), there have been many studies from both theoretical and algorithmic aspects. For example, the iterative thresholding (IT) algorithm (see [14]) was put forward, which uses a soft thresholding operator and linearized Bregman iteration to solve model (1.1). The iterative hard thresholding (IHT) algorithm (see [15]) was proposed, motivated by the effectiveness of the hard thresholding operator in compressed sensing. In addition, the accelerated proximal gradient (APG) method (see [16]), the singular value thresholding (SVT) algorithm (see [17]), the dual algorithm (see [18]), the augmented Lagrange multiplier (ALM) algorithm (see [19]), and the hybrid augmented Lagrange multiplier (HALM) algorithm (see [20]) were later presented to deal with the convex problem (1.1). Further details and derivations on the low-rank matrix recovery problem can be found in these works and the references given therein.

    This paper aims at establishing a new hybrid thresholding iterative algorithm with a diagonal-modify strategy, based on the ALM algorithm, for recovering a low-rank matrix. By using the diagonal-modify technique, the sequence of matrices generated by the new algorithm approximates the true solution well, which saves significant computational cost.

    The rest of the paper is organized as follows: In Section 2, the notations used throughout this paper and related preliminaries are introduced. In Section 3, we present the proposed matrix recovery algorithm in detail. The convergence analysis of the new algorithm is given in Section 4. Then, numerical experiments are reported in Section 5. Finally, we end the paper with concluding remarks in Section 6.

    In this section, we provide some basic notations and preliminaries that are used in our analysis. $\mathbb{R}^{n_1\times n_2}$ denotes the set of $n_1\times n_2$ real matrices; $A^T$ denotes the transpose of a matrix $A$. $\mathrm{diag}(d_{11},d_{22},\ldots,d_{nn})$ stands for a diagonal matrix with diagonal elements $d_{11},d_{22},\ldots,d_{nn}$. $\|A\|_1$ denotes the sum of the absolute values of the entries of $A$. $\|A\|_2$ represents the spectral norm, the square root of the maximum eigenvalue of $A^TA$, and the Frobenius norm is $\|A\|_F=\sqrt{\sum_{j=1}^{n_2}\sum_{i=1}^{n_1}a_{ij}^2}$. $\mathrm{tr}(A)$ represents the trace of $A$, and the standard inner product between two matrices is denoted by $\langle X,Y\rangle=\mathrm{tr}(X^TY)$.
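
    As a quick numerical illustration of these notations (an added sketch, not part of the original paper), the spectral norm, the Frobenius norm, and the trace inner product can be computed as follows:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))
Y = rng.standard_normal((4, 3))

spectral = np.linalg.norm(X, 2)       # ||X||_2, the largest singular value
frobenius = np.linalg.norm(X, 'fro')  # ||X||_F
inner = np.trace(X.T @ Y)             # <X, Y> = tr(X^T Y)

# The trace inner product equals the entrywise sum of products.
assert np.isclose(inner, (X * Y).sum())
```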

    Definition 1. (Singular Value Decomposition (SVD) [21]) The singular value decomposition (SVD) of a rank-$r$ matrix $A\in\mathbb{R}^{n_1\times n_2}$ is defined by

    $$A=U\Sigma_r V^T,\quad \Sigma_r=\mathrm{diag}(\sigma_1,\sigma_2,\ldots,\sigma_r),$$

    where $U\in\mathbb{R}^{n_1\times r}$ and $V\in\mathbb{R}^{n_2\times r}$ are both column-orthogonal matrices, and $\sigma_1\ge\sigma_2\ge\cdots\ge\sigma_r>0$ are all the positive singular values of $A$. In this notation, $\|A\|_*:=\sum_{k=1}^{r}\sigma_k(A)$ denotes the nuclear norm of $A$.
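
    A minimal NumPy sketch of this reduced SVD (added for illustration; the truncation rank r is supplied by the user in this example):

```python
import numpy as np

def reduced_svd(A, r):
    # Thin SVD of A truncated to its r leading singular triplets.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :r], s[:r], Vt[:r, :].T   # U: n1 x r, sigma: length r, V: n2 x r

# Example: a rank-2 matrix is reconstructed exactly from its reduced SVD,
# and its nuclear norm is the sum of the positive singular values.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 5))
U, s, V = reduced_svd(A, 2)
assert np.allclose(A, U @ np.diag(s) @ V.T)
```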

    Definition 2. For each $\tau\ge 0$, the soft thresholding (shrinkage) operator (see [17]) $S_\tau$ is defined by

    $$S_\tau[\sigma]=\begin{cases}\sigma-\tau, & \text{if } |\sigma|\ge\tau,\\ 0, & \text{if } |\sigma|<\tau,\end{cases}$$

    and the hard thresholding operator (see [22]) $\eta_\tau$ is defined by

    $$\eta_\tau[\sigma]=\begin{cases}\sigma, & \text{if } |\sigma|\ge\tau,\\ 0, & \text{if } |\sigma|<\tau,\end{cases}$$

    where $\sigma\in\mathbb{R}$, and $\tau$ is called a threshold.
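
    A small NumPy sketch of these two scalar operators, applied elementwise to an array of values (an illustrative addition following the formulas above):

```python
import numpy as np

def soft_threshold(sigma, tau):
    # S_tau: values with |sigma| >= tau are reduced by tau, the rest become 0.
    sigma = np.asarray(sigma, dtype=float)
    return np.where(np.abs(sigma) >= tau, sigma - tau, 0.0)

def hard_threshold(sigma, tau):
    # eta_tau: values with |sigma| >= tau are kept, the rest become 0.
    sigma = np.asarray(sigma, dtype=float)
    return np.where(np.abs(sigma) >= tau, sigma, 0.0)

print(soft_threshold([3.0, 1.5, 0.4], tau=1.0))  # [2.  0.5 0. ]
print(hard_threshold([3.0, 1.5, 0.4], tau=1.0))  # [3.  1.5 0. ]
```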

    Definition 3. (Hybrid Singular Value Thresholding Operator [20]) Let $X=U\Sigma_r V^T\in\mathbb{R}^{n_1\times n_2}$ be the SVD of $X$ as above. For each $\tau\ge 0$ and $z>1$, the hybrid thresholding operator $H_{\tau,z}$ is defined by

    $$H_{\tau,z}(X):=UH_{\tau,z}(\Sigma)V^T,\quad H_{\tau,z}(\Sigma)=\begin{cases}\eta_\tau[\sigma_i], & \sigma_i\ge z\tau,\\ S_\tau[\sigma_i], & \sigma_i<z\tau.\end{cases}$$

    Moreover, $S_\tau(A):=US_\tau(\Sigma)V^T$ and $\eta_\tau(A):=U\eta_\tau(\Sigma)V^T$, based on Definition 2.
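
    The following NumPy sketch (added for illustration) applies the hybrid operator $H_{\tau,z}$ to a matrix: singular values of size at least $z\tau$ pass through the hard threshold unchanged, while smaller ones are soft-thresholded:

```python
import numpy as np

def hybrid_svt(X, tau, z):
    # H_{tau,z}(X) = U H_{tau,z}(Sigma) V^T acting on the singular values of X:
    # sigma_i >= z*tau -> kept (hard threshold); sigma_i < z*tau -> soft-thresholded.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_new = np.where(s >= z * tau, s, np.maximum(s - tau, 0.0))
    return U @ np.diag(s_new) @ Vt
```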

    In this section, a new fast algorithm is proposed after introducing a related algorithm for solving problem (1.1).

    It is well-known that the partial augmented Lagrange function of (1.1) is

    $$L(A,E,Y,\mu)=\|A\|_*+\lambda\|E\|_1+\langle Y,\,D-A-E\rangle+\frac{\mu}{2}\|D-A-E\|_F^2,$$

    where $A,E,Y\in\mathbb{R}^{n_1\times n_2}$ and $\mu>0$ is the penalty parameter. The augmented Lagrange multiplier method has been applied to the low-rank matrix recovery problem and exhibits much better numerical behavior and much higher accuracy. The augmented Lagrange multiplier algorithm is summarized in the following Algorithm 3.1.

    Algorithm 3.1. (the ALM algorithm [19])
    Step 0: Given a sampled matrix $D=A+E$ and parameters $\mu_0>0$, $\rho>1$. Given also two initial matrices $Y_0=0$, $E_0=0$; set $k:=0$;
    Step 1: Solve $A_{k+1}=\arg\min_{A}L(A,E_k,Y_k,\mu_k)$: compute the SVD of the matrix $(D-E_k+\mu_k^{-1}Y_k)$,
    $[U_k,\Sigma_k,V_k]=\mathrm{svd}(D-E_k+\mu_k^{-1}Y_k)$;
    Step 2: Set
    $A_{k+1}=U_kS_{\mu_k^{-1}}(\Sigma_k)V_k^T$.
    Solve $E_{k+1}=\arg\min_{E}L(A_{k+1},E,Y_k,\mu_k)$:
    $E_{k+1}=S_{\mu_k^{-1}}(D-A_{k+1}+\mu_k^{-1}Y_k)$;
    Step 3: If $\|D-A_{k+1}-E_{k+1}\|_F/\|D\|_F<\epsilon_1$ and $\mu_k\|E_{k+1}-E_k\|_F/\|D\|_F<\epsilon_2$, stop; otherwise, go to the next step;
    Step 4: Set $Y_{k+1}=Y_k+\mu_k(D-A_{k+1}-E_{k+1})$.
    If $\mu_k\|E_{k+1}-E_k\|_F/\|D\|_F<\epsilon_2$, set $\mu_{k+1}=\rho\mu_k$; otherwise, go to Step 1.

    For convenience, $[U_k,\Sigma_k,V_k]=\mathrm{svd}(\cdot)$ denotes the SVD of the corresponding matrix.
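
    The following NumPy sketch is an illustrative, simplified implementation of Algorithm 3.1 (not the authors' code); the stopping tolerances and the initial $\mu$ are example values, and the E-update uses the weighted threshold $\lambda/\mu_k$ as in [19]:

```python
import numpy as np

def soft(M, tau):
    # Entrywise soft thresholding, used for both singular values and E.
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def alm_rpca(D, lam, rho=1.5, tol1=1e-8, tol2=1e-7, max_iter=500):
    # Algorithm 3.1 sketch: alternately update A (singular value shrinkage),
    # E (entrywise shrinkage), and the multiplier Y.
    Y = np.zeros_like(D)
    E = np.zeros_like(D)
    mu = 1.25 / np.linalg.norm(D, 2)
    normD = np.linalg.norm(D, 'fro')
    for _ in range(max_iter):
        U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
        A = U @ np.diag(soft(s, 1.0 / mu)) @ Vt
        E_new = soft(D - A + Y / mu, lam / mu)
        stop1 = np.linalg.norm(D - A - E_new, 'fro') / normD < tol1
        stop2 = mu * np.linalg.norm(E_new - E, 'fro') / normD < tol2
        E = E_new
        if stop1 and stop2:
            break
        Y = Y + mu * (D - A - E)
        if stop2:
            mu = rho * mu
    return A, E
```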

    The soft thresholding operator compresses the singular values by a constant amount and may lose some effective large values, while the hard thresholding operator is discontinuous, which reduces the smoothness of the recovered matrix. To combine the advantages of the two operators, a hybrid singular value thresholding operator was designed in [20]. Based on the augmented Lagrange multiplier algorithm, the new algorithm employs the hybrid singular value thresholding operator with diagonal-modify, $H$, to deal with the singular values in each iteration.

    The hybrid singular value thresholding operator works as follows: keep the large singular values unchanged, compress the middle-sized singular values by a constant, and set the small singular values to 0. This operator makes up for the deficiencies of a single soft or hard thresholding operator and partially improves the accuracy of the recovered matrix. Further, a diagonal modification $W(A)=AW$ is used to improve the recovery efficiency at the $k$-th iteration, where the diagonal matrix $W_k=\mathrm{diag}(w_{11}^{(k)},w_{22}^{(k)},\ldots,w_{nn}^{(k)})\in\mathbb{R}^{n\times n}$ is obtained by

    $$W_k=\arg\min_{W_k}\|A-A_kW_k\|_F. \tag{3.1}$$

    Equation (3.1) is easy to compute since only a few simple arithmetic operations are required, without extra cost. In fact, the exact solution of (3.1) is given by

    $$w_{jj}^{(k)}=\frac{\langle A(:,j),A_k(:,j)\rangle}{\langle A_k(:,j),A_k(:,j)\rangle},\quad j=1,\ldots,n.$$
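
    A NumPy sketch (added for illustration) of this columnwise computation; here T denotes the matrix the diagonal weights are fitted to (the paper writes A, the matrix being recovered), which is left as an input of the example:

```python
import numpy as np

def diagonal_modify(T, Ak):
    # Exact minimizer of ||T - Ak @ diag(w)||_F over diagonal weights w:
    # w_jj = <T(:, j), Ak(:, j)> / <Ak(:, j), Ak(:, j)>, computed column by column.
    num = np.einsum('ij,ij->j', T, Ak)
    den = np.einsum('ij,ij->j', Ak, Ak)
    w = np.divide(num, den, out=np.ones_like(num), where=den != 0)
    return w, Ak * w          # Ak * w multiplies column j of Ak by w[j]
```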

    The new algorithm can be designed as follows:

    The difference from the classical ALM algorithm lies in Steps 2 and 3. Moreover, Algorithm 3.2 includes the HALM algorithm in [20] as a special case when $W=I$.

    In this section, we briefly present the convergence of Algorithm 3.2 by analyzing the properties of the sequences $\{A_k\}$, $\{E_k\}$, and $\{Y_k\}$. Since the D-HALM algorithm is an extension of the classical ALM algorithm, its proof is a straightforward generalization. For details of the proofs and techniques, one can refer to [19] and the references given therein.

    Algorithm 3.2. (Hybrid augmented Lagrange multiplier algorithm with diagonal-modify, denoted by D-HALM)
    Step 0: Given a sampled matrix $D=A+E$. Set parameters $\mu_0>0$, $\rho>1$, $\epsilon_1$, $\epsilon_2$, and the initial matrices $Y_0=0$, $E_0=0$; set $k:=0$;
    Step 1: Solve $A_{k+1}=\arg\min_{A}L(A,E_k,Y_k,\mu_k)$: compute the SVD of the matrix $(D-E_k+\mu_k^{-1}Y_k)$,
    $[U_k,\Sigma_k,V_k]=\mathrm{svd}(D-E_k+\mu_k^{-1}Y_k)$;
    Step 2: Set
    $\tilde{A}_{k+1}=U_kH_{\mu_k^{-1},z}(\Sigma_k)V_k^T$, $\quad W_k=\arg\min_{W_k}\|A-\tilde{A}_{k+1}W_k\|_F$;
    Step 3: Set
    $A_{k+1}=\tilde{A}_{k+1}W_k$.
    Solve $E_{k+1}=\arg\min_{E}L(A_{k+1},E,Y_k,\mu_k)$:
    $E_{k+1}=H_{\mu_k^{-1},z}(D-A_{k+1}+\mu_k^{-1}Y_k)$;
    Step 4: If $\|D-A_{k+1}-E_{k+1}\|_F/\|D\|_F<\epsilon_1$ and $\mu_k\|E_{k+1}-E_k\|_F/\|D\|_F<\epsilon_2$, stop; otherwise, go to the next step;
    Step 5: Set $Y_{k+1}=Y_k+\mu_k(D-A_{k+1}-E_{k+1})$; if $\mu_k\|E_{k+1}-E_k\|_F/\|D\|_F<\epsilon_2$, set $\mu_{k+1}=\rho\mu_k$;
    otherwise, go to Step 1.
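
    A NumPy sketch of the D-HALM iteration follows (an illustrative reading of Algorithm 3.2, not the authors' code): the hybrid threshold is applied entrywise for the E-update with the weighted level $\lambda/\mu_k$, the diagonal modification is fitted to the current target $D-E_k$ as an assumption, and $z$, $\rho$, and the tolerances are example values:

```python
import numpy as np

def hybrid_scalar(M, tau, z):
    # Hybrid threshold: keep values >= z*tau, soft-threshold the rest.
    return np.where(np.abs(M) >= z * tau, M,
                    np.sign(M) * np.maximum(np.abs(M) - tau, 0.0))

def d_halm(D, lam, z=1.4, rho=1.5, tol1=1e-8, tol2=1e-7, max_iter=500):
    Y = np.zeros_like(D)
    E = np.zeros_like(D)
    mu = 1.25 / np.linalg.norm(D, 2)
    normD = np.linalg.norm(D, 'fro')
    for _ in range(max_iter):
        # Steps 1-2: hybrid singular value thresholding.
        U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
        A_tilde = U @ np.diag(hybrid_scalar(s, 1.0 / mu, z)) @ Vt
        # Step 3: diagonal modification fitted to the target T = D - E (assumption).
        T = D - E
        num = np.einsum('ij,ij->j', T, A_tilde)
        den = np.einsum('ij,ij->j', A_tilde, A_tilde)
        w = np.divide(num, den, out=np.ones_like(num), where=den != 0)
        A = A_tilde * w
        # E-update with the hybrid threshold applied entrywise.
        E_new = hybrid_scalar(D - A + Y / mu, lam / mu, z)
        stop1 = np.linalg.norm(D - A - E_new, 'fro') / normD < tol1
        stop2 = mu * np.linalg.norm(E_new - E, 'fro') / normD < tol2
        E = E_new
        if stop1 and stop2:
            break
        # Steps 4-5: multiplier and penalty updates.
        Y = Y + mu * (D - A - E)
        if stop2:
            mu = rho * mu
    return A, E
```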

    Lemma 4.1. (see [17]) Let $A\in\mathbb{R}^{n_1\times n_2}$ be a matrix and $U\Sigma V^T$ be its SVD. Then the set of subgradients of the nuclear norm of $A$ is given by

    $$\partial\|A\|_*=\{UV^T+W:\ W\in\mathbb{R}^{n_1\times n_2},\ U^TW=0,\ WV=0,\ \|W\|_2\le 1\}.$$

    Lemma 4.2. (see [19])

    $$\begin{aligned}&\|E_{k+1}-E^*\|_F^2+\mu_k^{-2}\|Y_{k+1}-Y^*\|_F^2\\ =\ &\|E_k-E^*\|_F^2+\mu_k^{-2}\|Y_k-Y^*\|_F^2-\|E_{k+1}-E_k\|_F^2-\mu_k^{-2}\|Y_{k+1}-Y_k\|_F^2\\ &-2\mu_k^{-1}\big(\langle Y_{k+1}-Y_k,\,E_{k+1}-E_k\rangle+\langle A_{k+1}-A^*,\,\bar{Y}_{k+1}-Y^*\rangle+\langle E_{k+1}-E^*,\,Y_{k+1}-Y^*\rangle\big),\end{aligned}$$

    where $(A^*,E^*)$ and $Y^*$ are the optimal solutions to problem (1.1) and its dual problem, respectively.

    Lemma 4.3. Let $\bar{Y}_k=Y_{k-1}+\mu_{k-1}(D-A_k-E_{k-1})$; then the sequences $\{\bar{Y}_k\}$ and $\{Y_k\}$ are bounded.

    Proof. Let $\bar{A}_{k+1}=U_k\mathcal{D}_{\mu_k^{-1}}(\Sigma_k)V_k^T$; then we have

    $$\begin{aligned}\partial_A L(A_{k+1},E_k,Y_k,\mu_k)&=\partial\|A_{k+1}\|_*-Y_k-\mu_k(D-A_{k+1}-E_k)\\ &=\partial\|\bar{A}_{k+1}\|_*-Y_k-\mu_k(D-\bar{A}_{k+1}-E_k)+\mu_k(A_{k+1}-\bar{A}_{k+1})\\ &=\mu_k(A_{k+1}-\bar{A}_{k+1})=\tau\mu_k\bar{U}_k\bar{V}_k^T=\bar{U}_k\bar{V}_k^T.\end{aligned}$$

    Thus,

    $$\bar{Y}_{k+1}=\partial\|A_{k+1}\|_*-\partial_A L(A_{k+1},E_k,Y_k,\mu_k).$$

    Now we can obtain the following result:

    $$\|\bar{Y}_{k+1}\|_2\le\big\|\partial\|A_{k+1}\|_*\big\|_2+\big\|\partial_A L(A_{k+1},E_k,Y_k,\mu_k)\big\|_2\le 2.$$

    From the update of $E_{k+1}$ in Algorithm 3.2, we can obtain

    $$\partial_E L(A_{k+1},E_{k+1},Y_k,\mu_k)=\partial(\lambda\|E_{k+1}\|_1)-Y_k-\mu_k(D-A_{k+1}-E_{k+1})\ni 0.$$

    Hence,

    $$Y_{k+1}\in\partial(\lambda\|E_{k+1}\|_1).$$

    The boundedness of the sequences $\{\bar{Y}_k\}$ and $\{Y_k\}$ is thus obtained.

    Lemma 4.4. (see [19]) The subgradient of a convex function is a monotone operator. Namely,

    $$\langle x_1-x_2,\,y_1-y_2\rangle\ge 0,\quad \forall\,y_i\in\partial f(x_i),\ i=1,2,$$

    where $f$ is a convex function.

    Lemma 4.5. (see [19]) The terms of the series

    $$\sum_{k=1}^{\infty}\mu_k^{-1}\big(\langle Y_{k+1}-Y_k,\,E_{k+1}-E_k\rangle+\langle A_{k+1}-A^*,\,\bar{Y}_{k+1}-Y^*\rangle+\langle E_{k+1}-E^*,\,Y_{k+1}-Y^*\rangle\big)$$

    are nonnegative, and the series is convergent provided that $\mu_k$ is nondecreasing.

    In Algorithm 3.2, we do not have to solve the sub-problem

    $$(A_k,E_k)=\arg\min_{A,E}L(A,E,Y_k,\mu_k)$$

    exactly. Rather, updating them once when solving this sub-problem is sufficient for $A_k$ and $E_k$ to converge to a local optimal solution of problem (1.1). This leads to an inexact ALM with diagonal-modify, similar to the inexact ALM introduced in [19]. After the analysis of $A_k$, $E_k$, and $Y_k$ via the lemmas above, the validity and optimality of Algorithm 3.2 are guaranteed by the following theorem (see [19]).

    Theorem 4.1. Suppose that $\mu_k$ is nondecreasing and the series $\sum_{k=1}^{\infty}\mu_k^{-1}$ diverges; then the sequence $\{(A_k,E_k)\}$ converges to the optimal solution $(A^*,E^*)$ of problem (1.1).

    Proof. Because $Y^*\in\partial\|A^*\|_*$, $Y^*\in\partial(\lambda\|E^*\|_1)$, $Y_{k+1}\in\partial(\lambda\|E_{k+1}\|_1)$, and $\bar{Y}_{k+1}=\partial\|A_{k+1}\|_*-\partial_A L(A_{k+1},E_k,Y_k,\mu_k)$, we have

    $$\langle Y_{k+1}-Y^*,\,E_{k+1}-E^*\rangle\ge 0,$$
    $$\langle E_{k+1}-E_k,\,Y_{k+1}-Y_k\rangle\ge 0.$$

    By analysis, we obtain

    $$\langle A_{k+1}-A^*,\,\bar{Y}_{k+1}-Y^*\rangle=\langle A_{k+1}-A^*,\,A_{k+1}-A^*+\bar{U}_k\bar{V}_k^T\rangle=\langle A_{k+1}-A^*,\,A_{k+1}-A^*\rangle+\langle A_{k+1}-A^*,\,\bar{U}_k\bar{V}_k^T\rangle.$$

    Obviously, we can obtain

    $$\langle A_{k+1}-A^*,\,A_{k+1}-A^*\rangle\ge 0,$$
    $$\langle A_{k+1}-A^*,\,\bar{U}_k\bar{V}_k^T\rangle\ge 0.$$

    Thus, $\|E_k-E^*\|_F^2+\mu_k^{-2}\|Y_k-Y^*\|_F^2$ is non-increasing, as shown in [19]. From Theorem 2 in [19], it holds that

    $$\lim_{k\to\infty}A_k=A^*,\qquad \lim_{k\to\infty}E_k=E^*,$$

    and thus $(A^*,E^*)$ is the optimal solution of the low-rank matrix recovery problem (1.1), which completes the proof.

    In this section, some random data and video sequences are used to demonstrate that the proposed algorithm is feasible and effective for solving the low-rank matrix recovery problem (1.1). All the experiments were performed under Windows 11 and MATLAB R2019a running on a DELL laptop.

    Results of four algorithms (APG, ALM, HALM, and D-HALM) are provided for $n\times n$ matrices with different ranks for problem (1.1). We conducted the numerical experiments on the same workstation, and we compare the number of iterations (denoted by IT), the computing time in seconds (denoted by CPU(s)), and the deviations (Error 1 and Error 2), defined by

    $$\text{Error 1}=\frac{\|\bar{A}-A\|_F}{\|A\|_F},\qquad \text{Error 2}=\frac{\|\bar{E}-E\|_F}{\|E\|_F}.$$

    In the experiments, $p=m/n^2$ represents the sampling density, where $m$ is the number of observed entries. We denote the optimal solution by the ordered pair $(A,E)$ and the output by $(\bar{A},\bar{E})$. For Algorithm 3.2 (D-HALM), we choose a fixed weighting parameter $\lambda=1/\sqrt{n}$, set $Y_0=D/J(D)$, where $J(D)=\max(\|D\|_2,\lambda^{-1}\|D\|_\infty)$ [19], and we empirically set the parameters $\mu_0=1.25/\|D\|_2$, $\tau_0=0.5/\|D\|_2$, $\epsilon_1=1\times 10^{-8}$, $\epsilon_2=1\times 10^{-7}$, and $z=1.4$. The ALM, HALM, and D-HALM algorithms share the same parameters.

    Tests were conducted for $n_1=n_2=n$, $p\in\{0.1,0.2,0.3,0.4\}$, and the rank of the underlying matrix to be reconstructed was $r=0.005n$. The comparison results are listed in Tables 1–4, and the computing-time curves of the different algorithms are plotted in Figures 1 and 2.
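
    For reference, the following NumPy sketch (an added illustration, not the authors' MATLAB test code) generates a synthetic instance and evaluates Error 1 and Error 2; it reuses the d_halm sketch given after Algorithm 3.2, and here p is used as the corruption fraction, a simplification of the sampling-density setup described above:

```python
import numpy as np

def make_instance(n, rank, p, seed=0):
    # Low-rank A plus a sparse corruption E supported on a fraction p of entries.
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, rank)) @ rng.standard_normal((rank, n))
    E = np.zeros((n, n))
    mask = rng.random((n, n)) < p
    E[mask] = rng.uniform(-50, 50, size=mask.sum())
    return A, E, A + E

A, E, D = make_instance(n=200, rank=1, p=0.1)
lam = 1.0 / np.sqrt(200)
A_hat, E_hat = d_halm(D, lam)   # d_halm: the sketch given after Algorithm 3.2
err1 = np.linalg.norm(A_hat - A, 'fro') / np.linalg.norm(A, 'fro')
err2 = np.linalg.norm(E_hat - E, 'fro') / np.linalg.norm(E, 'fro')
print(f"Error 1 = {err1:.2e}, Error 2 = {err2:.2e}")
```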

    Table 1.  Compared results of four algorithms for p=0.1.
    Size n r(A) Algorithm IT Error 1 Error 2 CPU(s) SP
    APG 120 4.91e-7 2.03e-5 135.06 3.9
    2000 10 ALM 33 8.57e-10 1.25e-7 53.56 1.5
    HALM 33 2.01e-9 1.99e-7 44.10 1.3
    D-HALM 32 1.52e-9 1.91e-7 34.90 1
    APG 120 5.24e-7 2.63e-5 250.30 4.0
    3000 15 ALM 33 1.20e-9 1.90e-7 129.75 2.1
    HALM 32 2.09e-9 1.85e-7 96.51 1.5
    D-HALM 32 1.13e-9 1.79e-7 62.96 1
    APG 119 5.32e-7 3.76e-5 1431.28 3.7
    6000 30 ALM 33 1.15e-9 2.63e-7 608.93 1.6
    HALM 34 1.13e-9 2.39e-7 420.77 1.1
    D-HALM 31 1.11e-9 2.51e-7 385.50 1
    APG 120 6.56e-7 6.11e-5 3530.34 4.3
    10,000 50 ALM 33 1.23e-9 3.34e-7 2160.80 2.6
    HALM 33 1.32e-9 3.92e-9 951.68 1.2
    D-HALM 31 1.02e-9 2.98e-7 821.62 1
    APG 119 5.32e-7 6.19e-5 9932.01 3.0
    16,000 80 ALM 34 1.52e-9 5.68e-7 7293.78 2.2
    HALM 34 1.49e-9 5.56e-7 4404.62 1.3
    D-HALM 32 1.09e-9 4.96e-7 3363.61 1
    APG 117 6.83e-7 9.43e-5 19,800.99 5.2
    22,000 110 ALM 35 9.22e-10 4.04e-7 17,623.04 4.6
    HALM 34 8.70e-10 3.63e-7 4212.63 1.1
    D-HALM 33 8.97e-10 3.93e-7 3795.17 1
    APG 117 6.81e-7 1.10e-4 50,699.09 4.5
    30,000 150 ALM 34 9.51e-10 4.86e-7 44,212.62 3.9
    HALM 33 8.76e-10 4.49e-7 15,920.68 1.4
    D-HALM 32 6.76e-10 3.69e-7 11,382.12 1

    Table 2.  Compared results of four algorithms for p=0.2.
    Size n r(A) Algorithm IT Error 1 Error 2 CPU(s) SP
    APG 122 4.26e-7 1.60e-5 129.65 4.2
    2000 10 ALM 33 1.30e-9 8.41e-8 55.65 1.8
    HALM 32 1.39e-9 1.55e-7 44.15 1.4
    D-HALM 31 1.46e-9 1.21e-7 30.57 1
    APG 123 3.84e-7 1.75e-5 238.86 4.0
    3000 15 ALM 33 1.69e-9 1.74e-7 135.97 2.3
    HALM 33 1.62e-9 1.33e-7 96.55 1.6
    D-HALM 30 1.47e-9 1.58e-7 59.22 1
    APG 122 4.26e-7 2.75e-5 1469.07 5.0
    6000 30 ALM 34 9.77e-10 1.45e-7 639.15 2.3
    HALM 33 8.99e-10 1.99e-7 335.54 1.2
    D-HALM 32 1.01e-9 1.52e-7 283.67 1
    APG 120 4.46e-7 4.13e-5 3525.37 4
    10,000 50 ALM 34 1.09e-9 2.10e-7 2178.51 2.5
    HALM 34 1.14e-9 2.20e-7 1022.68 1.2
    D-HALM 31 1.04e-9 2.11e-7 872.03 1
    APG 124 3.80e-7 3.60e-5 11,914.50 4.0
    16,000 80 ALM 34 1.75e-9 3.16e-7 11,673.95 3.9
    HALM 34 1.28e-9 3.12e-7 3316.21 1.1
    D-HALM 32 1.03e-9 2.02e-7 3001.70 1
    APG 120 5.48e-7 6.88e-5 17,883.42 5.6
    22,000 110 ALM 36 1.37e-9 3.91e-7 18,003.14 5.7
    HALM 35 1.55e-9 3.12e-7 4312.69 1.4
    D-HALM 33 1.07e-9 2.41e-7 3172.79 1
    APG 120 5.50e-7 8.07e-5 56,540.07 6.1
    30,000 150 ALM 35 1.50e-9 5.00e-7 44,925.23 4.8
    HALM 35 7.43e-10 2.48e-7 15,920.68 1.7
    D-HALM 31 2.43e-10 1.08e-7 9287.39 1

    Table 3.  Compared results of four algorithms for p=0.3.
    Size n r(A) Algorithm IT Error 1 Error 2 CPU(s) SP
    APG 124 3.71e-7 1.26e-5 142.08 4.6
    2000 10 ALM 34 9.14e-10 5.39e-8 58.00 1.9
    HALM 34 8.22e-10 5.66e-7 42.1 1.4
    D-HALM 31 2.06e-10 3.77e-8 30.90 1
    APG 124 3.75e-7 1.56e-5 269.70 4.0
    3000 15 ALM 34 1.20e-9 9.53e-8 139.49 2.1
    HALM 34 2.11e-9 9.77e-8 90.36 1.4
    D-HALM 31 1.09e-9 6.18e-8 66.81 1
    APG 124 3.75e-7 2.22e-5 1363.39 5.1
    6000 30 ALM 34 1.41e-9 1.55e-7 648.55 2.4
    HALM 33 2.33e-9 3.65e-7 385.87 1.4
    D-HALM 33 1.07e-9 1.10e-7 269.28 1
    APG 124 4.27e-7 3.29e-5 2937.07 2.9
    10,000 50 ALM 34 1.54e-9 2.19e-7 2209.08 2.1
    HALM 34 1.32e-9 2.18e-7 1224.68 1.2
    D-HALM 31 1.02e-9 2.11e-7 1024.10 1
    APG 124 3.80e-7 3.60e-5 11,914.50 4.0
    16,000 80 ALM 34 1.75e-9 3.16e-7 11,673.95 3.9
    HALM 33 1.75e-9 3.12e-7 3316.5 1.1
    D-HALM 32 1.03e-9 2.02e-7 3001.70 1
    APG 122 4.91e-7 5.61e-5 18,187.18 4.2
    22,000 110 ALM 34 1.02e-9 2.15e-7 25,549.05 5.9
    HALM 34 1.11e-9 2.20e-7 4855.6 1.1
    D-HALM 30 1.02e-9 1.15e-7 4354.47 1
    APG 122 4.71e-7 6.53e-5 46,154.80 3.8
    30,000 150 ALM 34 1.10e-9 2.73e-7 66,690.46 5.5
    HALM 34 8.55e-10 2.31e-7 15,920.68 1.3
    D-HALM 31 5.98e-10 2.08e-7 12,095.16 1

    Table 4.  Compared results of four algorithms for p=0.4.
    Size n r(A) Algorithm IT Error 1 Error 2 CPU(s) SP
    APG 124 3.36e-7 1.04e-5 129.32 4.7
    2000 10 ALM 33 1.14e-9 5.29e-8 57.46 2.1
    HALM 33 1.39e-9 4.88e-7 42.1 1.6
    D-HALM 32 1.02e-9 2.81e-8 27.83 1
    APG 126 3.36e-7 1.29e-5 253.08 3.9
    3000 15 ALM 34 1.44e-9 8.37e-8 143.15 2.2
    HALM 33 1.48e-9 8.12e-8 90.40 1.4
    D-HALM 33 1.57e-9 7.58e-8 65.77 1
    APG 126 3.38e-7 1.81e-5 1593.19 7.3
    6000 30 ALM 34 1.83e-9 1.57e-7 654.36 3.0
    HALM 33 1.77e-9 1.98e-7 385.45 1.8
    D-HALM 31 1.09e-9 1.54e-7 218.02 1
    APG 124 4.21e-7 2.96e-5 3527.02 3.9
    10,000 50 ALM 34 2.04e-9 2.24e-7 2168.58 2.4
    HALM 34 2.08e-9 2.24e-7 1004.68 1.1
    D-HALM 32 2.06e-9 2.16e-7 908.74 1
    APG 126 3.38e-7 2.97e-5 9472.75 4.2
    16,000 80 ALM 35 1.24e-9 1.72e-7 10,515.76 4.7
    HALM 35 1.25e-9 1.74e-7 2636.23 1.2
    D-HALM 33 1.25e-9 1.74e-7 2236.69 1
    APG 123 4.96e-7 5.05e-5 20,905.17 4.2
    22,000 110 ALM 39 1.37e-9 2.24e-7 24,175.33 4.8
    HALM 38 1.38e-9 2.36e-7 5855.87 1.2
    D-HALM 34 1.36e-9 2.23e-7 5000.01 1
    APG 124 4.38e-7 5.32e-5 48,635.34 4.3
    30,000 150 ALM 35 1.46e-9 2.79e-7 56,912.09 5.0
    HALM 35 1.43e-9 2.63e-7 12,920.68 1.2
    D-HALM 35 1.30e-9 2.50e-7 11,302.13 1

    Figure 1.  Compared curves of CPU(s) time of the four algorithms when $n\le 10^4$.
    Figure 2.  Compared curves of CPU(s) time of the four algorithms when $n\ge 10^4$.

    From Figures 1 and 2, it can be seen that the computing time of the D-HALM algorithm was always less than that of the other algorithms; the larger the matrix, the more obvious the advantage.

    As can be seen from Tables 1–4, the proposed algorithm and the ALM and HALM algorithms require almost the same numbers of iterations and achieve similar relative errors. However, the CPU time of our algorithm was clearly less than that of the APG, ALM, and HALM algorithms.

    By introducing the speed-up (denoted by SP) as follows,

    $$\mathrm{SP}=\frac{\text{time of the other algorithm}}{\text{time of the D-HALM algorithm}},$$

    it is noted that the maximum SP values of the APG, ALM, and HALM algorithms were approximately 7.3, 2.6, and 1.6 when the matrix size $n\le 10{,}000$; when $n>10{,}000$, they were approximately 6.1, 5.9, and 1.7, respectively. The computational cost of the proposed algorithm was significantly lower than that of the other algorithms.

    In this subsection, we use four surveillance video sequences (video 1: Hall of a business building; video 2: Mall; video 3: Bootstrap; video 4: Fountain) with different resolutions to compare the separation performance of three algorithms (ALM, HALM, and D-HALM). One of the 200 frames of each of the four video sequences was randomly selected for testing. Figures 3–5 show the results of foreground and background separation of the four surveillance video sequences under the $D=A+E$ model. The errors and running times are reported in Table 5, where

    $$\text{Error}=\frac{\|A+E-D\|_F}{\|D\|_F}.$$
    Figure 3.  Experiment results of the ALM algorithm on videos 1–4, respectively.
    Figure 4.  Experiment results of HALM algorithm on videos 1–4, respectively.
    Figure 5.  Experiment results of D-HALM algorithm on videos 1–4, respectively.
    Table 5.  Background and foreground separation results for a randomly extracted frame.
    Video Resolution Item ALM HALM D-HALM
    Hall 144×176 Error 8.83e-8 6.90e-8 6.42e-8
    CPU(s) 334.35 198.23 108.85
    Mall 256×320 Error 8.52e-8 8.23e-8 8.07e-8
    CPU(s) 1038.13 231.88 191.09
    Bootstrap 120×160 Error 9.17e-8 8.06e-8 7.31e-8
    CPU(s) 275.68 112.36 98.57
    Fountain 128×160 Error 8.58e-8 7.05e-8 6.45e-8
    CPU(s) 304.14 135.33 99.30


    From Table 5, it can be seen that the three algorithms achieved similar separation accuracy, while the running time of D-HALM was the lowest.

    To solve the low-rank matrix recovery problem, we have proposed a hybrid singular value thresholding algorithm with diagonal-modify (D-HALM) based on the classical ALM algorithm. In the proposed algorithm, a hybrid singular value thresholding is used and a diagonal modification is carried out in each iteration. The iterative sequence generated by Algorithm 3.2 for solving the optimization model converges to the optimal solution, which is guaranteed by the traditional convergence theory. In the experiments, the proposed algorithm was compared with the APG, ALM, and HALM algorithms. The experiments, based on randomly generated simulation data and video sequence separation, have shown that the proposed algorithm takes less time and attains higher precision for solving the low-rank matrix recovery problem, and that the D-HALM algorithm is more efficient than the other algorithms. In addition, it is worth mentioning that the new algorithm can be further developed to solve tensor completion problems.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors are very much indebted to the anonymous referees for their helpful comments and suggestions, which greatly improved the original manuscript of this paper. The authors are so thankful for the support from the NSF of China (12371381), the special fund for science and technology innovation teams of Shanxi province (202204051002018), and the Shanxi higher education science and technology innovation project (2023L243).

    The authors declare that they have no conflict of interests.



    [1] A. Mongia, A. Majumdar, Matrix completion on multiple graphs: Application in collaborative filtering, Signal Process., 165 (2019), 144–148. https://doi.org/10.1016/j.sigpro.2019.07.002 doi: 10.1016/j.sigpro.2019.07.002
    [2] S. Daei, F. Haddadi, A. Amini, Exploiting prior information in block sparse signals, IEEE Trans. Signal Process., 99 (2018). https://doi.org/10.1109/TSP.2019.2931209 doi: 10.1109/TSP.2019.2931209
    [3] Z. Zhang, K. Zhao, H. Zha, Inducible regularization for low-rank matrix factorizations for collaborative filtering, Neurocomputing, 97 (2012), 52–62. https://doi.org/10.1016/j.neucom.2012.05.010 doi: 10.1016/j.neucom.2012.05.010
    [4] S. Li, Q. Li, Z. Zhu, G. Tang, M. B. Wakin, The global geometry of centralized and distributed low-rank matrix recovery without regularization, IEEE Signal Process. Lett., 27 (2020), 1400–1404. https://doi.org/10.1109/LSP.2020.3008876 doi: 10.1109/LSP.2020.3008876
    [5] B. J. Han, J. Y. Sim, Glass reflection removal using co-saliency based image alignment and low-rank matrix completion in gradient domain, IEEE Trans. Image Process., 27 (2018), 1–1. https://doi.org/10.1109/TIP.2018.2849880 doi: 10.1109/TIP.2018.2849880
    [6] J. Li, F. Chen, Y. Wang, F. Ji, H. Yu, Spatial spectrum estimation of incoherently distributed sources based on low-rank matrix recovery, IEEE Trans. Veh. Technol., 99 (2020). https://doi.org/10.1109/TVT.2020.2986783 doi: 10.1109/TVT.2020.2986783
    [7] Z. Zhu, Q. Li, G. Tang, M. B. Wakin, Global optimality in low-rank matrix optimization, IEEE Trans. Signal Process., 66 (2018), 3614–3628. https://doi.org/10.1109/TSP.2018.2835403 doi: 10.1109/TSP.2018.2835403
    [8] M. Kech, M. Wolf, Constrained quantum tomography of semi-algebraic sets with applications to low-rank matrix recovery, Mathematics, 6 (2017), 171–195. https://doi.org/10.1093/imaiai/iaw019 doi: 10.1093/imaiai/iaw019
    [9] E. J. Candès, B. Recht, Exact low-rank matrix completion via convex optimization, in 2008 46th Annual Allerton Conference on Communication, Control, and Computing, (2009), 806–812. https://doi.org/10.1109/ALLERTON.2008.4797640
    [10] E. J. Candès, T. Tao, The power of convex relaxation: Near-optimal matrix completion, IEEE Trans. Inf. Theory, 56 (2010), 2053–2080. https://doi.org/10.1109/TIT.2010.2044061 doi: 10.1109/TIT.2010.2044061
    [11] B. Recht, A simpler approach to matrix completion, J. Mach. Learn. Res., 12 (2011), 3413–3430.
    [12] B. Recht, M. Fazel, P. A. Parrilo, Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Rev., 52 (2010), 471–501. https://doi.org/10.1137/070697835 doi: 10.1137/070697835
    [13] D. Gross, Recovering low-rank matrices from few coefficients in any basis, IEEE Trans Inf. Theory, 57 (2011), 1548–1566. https://doi.org/10.1109/TIT.2011.2104999 doi: 10.1109/TIT.2011.2104999
    [14] J. Wright, A. Ganesh, S. Rao, Y. Ma, Robust principal component analysis: Exact recovery of corrupted low-rank matrices, in Advances in Neural Information Processing Systems, 2009.
    [15] R. Meka, P. Jain, I. Dhillon, Guaranteed rank minimization via singular value projection, preprint, arXiv: 0909.5457. https://doi.org/10.48550/arXiv.0909.5457
    [16] A. Ganesh, Z. Lin, J. Wright, L. Wu, M. Chen, Y. Ma, Fast algorithms for recovering a corrupted low-rank matrix, in 2009 3rd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 2009. https://doi.org/10.1109/CAMSAP.2009.5413299.
    [17] J. F. Cai, E. J. Candès, Z. Shen, A singular value thresholding algorithm for matrix completion, SIAM J. Optim., 20 (2010), 1956–1982. https://doi.org/10.1137/080738970 doi: 10.1137/080738970
    [18] Z. Lin, A. Ganesh, J. Wright, L. Wu, M. Chen, Y. Ma, Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix, Coordinated Science Laboratory Report no. UILU-ENG-09-2214, DC-246, 2009. https://doi.org/10.1017/S0025315400020749
    [19] Z. Lin, M. Chen, Y. Ma, The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices, preprint, arXiv: 1009.5055. https://doi.org/10.48550/arXiv.1009.5055
    [20] J. Guo, R. P. Wen, Hybrid augmented Lagrange multiplier algorithm for low-rank matrix completion, Numer. Math.: Theory, Methods Appl., 44 (2022), 187–201.
    [21] G. H. Golub, C. F. Van Loan, Matrix Computations, 4th edition, Johns Hopkins Studies in Mathematical Sciences, 2013.
    [22] D. Goldfarb, S. Ma, Convergence of fixed-point continuation algorithms for matrix rank minimization, Found. Comput. Math., 11 (2011), 183–210. https://doi.org/10.1007/s10208-011-9084-6 doi: 10.1007/s10208-011-9084-6
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)