
This paper considers the minimax perturbation bounds of the low-rank matrix under the Ky Fan norm. We first explore the upper bounds via the best rank-$r$ approximation $\hat{A}_r$ of the observation matrix $\hat{A}$. Next, the lower bounds are established by constructing special matrix pairs, showing that the upper bounds on the low-rank matrix estimation error are tight. In addition, we derive rate-optimal perturbation bounds for the left and right singular subspaces under the Ky Fan norm $\sin\Theta$ distance. Finally, some simulations are carried out to support our theory.
Citation: Xinyu Qi, Jinru Wang, Jiating Shao. Minimax perturbation bounds of the low-rank matrix under Ky Fan norm[J]. AIMS Mathematics, 2022, 7(5): 7595-7605. doi: 10.3934/math.2022426
Singular value decomposition (SVD) has been widely used in statistics, machine learning, and applied mathematics. Perturbation bounds often play a critical role in the analysis of the SVD. To be more specific, let
$$\hat{A} = A + E,$$
where $A$ and $E$ both have size $d_1 \times d_2$; $A$ is the signal matrix of interest, while $E$ stands for a perturbation matrix. In this paper, suppose that $A$ and $\hat{A}$ have the following singular value decompositions,
$$A = U \Sigma_r V^T + U_\perp \Sigma_{r\perp} V_\perp^T = \sum_{i=1}^{r} \sigma_i u_i v_i^T + \sum_{i=r+1}^{d_1 \wedge d_2} \sigma_i u_i v_i^T, \tag{1.1}$$
$$\hat{A} = \hat{U} \hat{\Sigma}_r \hat{V}^T + \hat{U}_\perp \hat{\Sigma}_{r\perp} \hat{V}_\perp^T = \sum_{i=1}^{r} \hat{\sigma}_i \hat{u}_i \hat{v}_i^T + \sum_{i=r+1}^{d_1 \wedge d_2} \hat{\sigma}_i \hat{u}_i \hat{v}_i^T, \tag{1.2}$$
where $r \le \operatorname{rank}(A)$ and $d_1 \wedge d_2$ stands for $\min\{d_1, d_2\}$. The singular values $\sigma_i$ and $\hat{\sigma}_i$ are in decreasing order, $U = [u_1, \ldots, u_r], \hat{U} = [\hat{u}_1, \ldots, \hat{u}_r] \in \mathbb{O}_{d_1, r}$ (the set of all $d_1 \times r$ matrices with orthonormal columns, with $\mathbb{O}_{d_1} := \mathbb{O}_{d_1, d_1}$), and $V = [v_1, \ldots, v_r], \hat{V} = [\hat{v}_1, \ldots, \hat{v}_r] \in \mathbb{O}_{d_2, r}$. Unlike compressed sensing [5], which aims to reconstruct the original signal, our goal is to estimate the underlying low-rank matrix $A$ and its leading left and right singular matrices $U, V$.
The problem of estimating $U, V$ has been widely studied in the literature [1,3,4,10,12]. Among these results, Davis and Kahan [3] and Wedin [12] established the fundamental methods of matrix perturbation theory; Vu [10] and Wang [11] discussed the rotation of singular vectors under random perturbation; Cai and Zhang [1] studied rate-optimal perturbation bounds for singular subspaces; Fan et al. [4] gave an eigenvector perturbation bound and applied it to robust covariance estimation. In addition, Luo et al. [6] considered the perturbation bound under the Schatten-$q$ norm. So far, few existing works have focused on the perturbation analysis of the matrix $A$ itself. This paper considers the estimation of the rank-$r$ matrix $A$ under the Ky Fan norm, which extends the results of Luo et al. [6].
For a given $k \in \{1, 2, \ldots, d_1 \wedge d_2\}$, the Ky Fan norm $\|M\|_{(k)}$ of a matrix $M \in \mathbb{R}^{d_1 \times d_2}$ is given by $\|M\|_{(k)} = \sum_{i=1}^{k} \sigma_i(M)$. Clearly, $\|\cdot\|_{(k)}$ is a unitarily invariant norm.
In this paper, we consider the estimation of a rank-$r$ matrix $A$ (i.e., $\Sigma_{r\perp} = 0$) via the rank-$r$ truncated SVD $\hat{A}_r := \hat{U}\hat{\Sigma}_r\hat{V}^T$ of $\hat{A}$. It is well known that $\hat{A}_r$ is the best rank-$r$ approximation of $\hat{A}$. Here and throughout, $A_l$ or $(A)_l$ denotes the best rank-$l$ approximation of the matrix $A$.
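Both objects can be computed from a single SVD call. The following is a minimal NumPy sketch; the helper names `ky_fan_norm` and `truncated_svd` are ours, not notation from the paper:

```python
import numpy as np

def ky_fan_norm(M, k):
    """Ky Fan norm ||M||_(k): sum of the k largest singular values of M."""
    s = np.linalg.svd(M, compute_uv=False)  # singular values, descending
    return s[:k].sum()

def truncated_svd(M, r):
    """Best rank-r approximation of M (Eckart-Young-Mirsky)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
```

For example, `ky_fan_norm(M, 1)` is the spectral norm and `ky_fan_norm(M, min(M.shape))` is the nuclear norm.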
Firstly, we establish the following upper bound.
Theorem 1.1. Let the observation matrix $\hat{A} = A + E \in \mathbb{R}^{d_1 \times d_2}$, where $A$ is an unknown rank-$r$ matrix and $E$ is the perturbation matrix. Then
$$\|\hat{A}_r - A\|_{(k)} \le 3\|E_r\|_{(k)}, \quad k = 1, 2, \ldots, d_1 \wedge d_2,$$
where $E_r$ denotes the best rank-$r$ approximation of the matrix $E$.
Remark 1.1. According to the Eckart–Young–Mirsky theorem and $\operatorname{rank}(A) = r$, we have $\|\hat{A}_r - \hat{A}\|_{(k)} \le \|A - \hat{A}\|_{(k)}$. Therefore,
$$\|\hat{A}_r - A\|_{(k)} \le \|\hat{A}_r - \hat{A}\|_{(k)} + \|\hat{A} - A\|_{(k)} \le 2\|\hat{A} - A\|_{(k)} = 2\|E\|_{(k)}. \tag{1.3}$$
If $r \ll d_1 \wedge d_2$, then $\|E_r\|_{(k)}$ can be much smaller than $\|E\|_{(k)}$ for any $k \gg r$.
Remark 1.2. If $k = d_1 \wedge d_2$, both the Ky Fan norm and the Schatten-$1$ norm equal the nuclear norm; if $k = 1$, both the Ky Fan norm and the Schatten-$\infty$ norm equal the spectral norm. Otherwise, neither norm family contains the other. Therefore, our results can be regarded as a supplement to the existing results.
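The contrast in Remark 1.1 can be illustrated numerically. The following sketch (the dimensions and noise level are our illustrative choices) checks that both bounds hold on a random instance and that the bound of Theorem 1.1 is the smaller one when $r \ll d_1 \wedge d_2$:

```python
import numpy as np

def ky_fan(M, k):
    return np.linalg.svd(M, compute_uv=False)[:k].sum()

def best_rank(M, r):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(1)
d, r, k = 50, 3, 50                         # k = d1 ∧ d2: the nuclear-norm case
U, _ = np.linalg.qr(rng.standard_normal((d, r)))
V, _ = np.linalg.qr(rng.standard_normal((d, r)))
A = U @ np.diag([10.0, 8.0, 6.0]) @ V.T     # unknown rank-r signal
E = 0.01 * rng.standard_normal((d, d))      # dense, full-rank noise
err = ky_fan(best_rank(A + E, r) - A, k)    # true error of the truncated SVD

assert err <= 3 * ky_fan(best_rank(E, r), k)   # Theorem 1.1
assert err <= 2 * ky_fan(E, k)                 # the cruder bound (1.3)
```

With dense noise and $r \ll d$, $3\|E_r\|_{(k)}$ is far smaller than $2\|E\|_{(k)}$ here, which is the point of Remark 1.1.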
Before stating the lower bound, for any $t > 0$, we define the class of pairs $(A, E)$ as
$$\mathcal{F}_r(t) = \{(A, E) : \operatorname{rank}(A) = r,\ \|E\|_{(k)} \le t\}. \tag{1.4}$$
Here $A, E \in \mathbb{R}^{d_1 \times d_2}$ and $k \in \{1, 2, \ldots, d_1 \wedge d_2\}$.
Theorem 1.2. For the low-rank perturbation model $\hat{A} = A + E \in \mathbb{R}^{d_1 \times d_2}$, if $r \le \frac{1}{2}(d_1 \wedge d_2)$, then for any estimator $\tilde{A}$ based on the observation matrix $A + E$,
$$\inf_{\tilde{A}} \sup_{(A, E) \in \mathcal{F}_r(t)} \|\tilde{A} - A\|_{(k)} \ge \frac{t}{2},$$
where $k \in \{1, 2, \ldots, d_1 \wedge d_2\}$.
Theorem 1.2 shows that the upper bound given in Theorem 1.1 is sharp for the rank-$r$ truncated singular value decomposition estimator $\hat{A}_r$.
The principal angle matrix $\Theta(V_1, V_2)$ of $V_1, V_2 \in \mathbb{O}_{d, r}$ is the diagonal matrix
$$\Theta(V_1, V_2) = \operatorname{diag}\{\cos^{-1}(\sigma_1), \cos^{-1}(\sigma_2), \ldots, \cos^{-1}(\sigma_r)\}$$
with the singular values $\sigma_i := \sigma_i(V_1^T V_2)$ of $V_1^T V_2$ satisfying $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r \ge 0$. When $r = 1$, $\Theta(V_1, V_2)$ coincides with the angle between two $d$-dimensional unit vectors. In this paper, the $\sin\Theta$ distance is used to measure the difference between $V_1$ and $V_2$, i.e.,
$$\|\sin\Theta(V_1, V_2)\|_{(k)} = \|\operatorname{diag}\{\sin\cos^{-1}\sigma_1, \ldots, \sin\cos^{-1}\sigma_r\}\|_{(k)} = \sum_{i=r-(k \wedge r)+1}^{r} (1 - \sigma_i^2)^{1/2},$$
where the sum runs over the $k \wedge r$ largest sines (equivalently, the smallest cosines), since the Ky Fan norm takes the largest singular values of the diagonal matrix. Although $\|\sin\Theta(V_1, V_2)\|_{(k)}$ only defines a semi-metric on $\mathbb{O}_{d, r}$, it still satisfies the triangle inequality
$$\|\sin\Theta(V_1, V_2)\|_{(k)} \le \|\sin\Theta(V_1, V_3)\|_{(k)} + \|\sin\Theta(V_3, V_2)\|_{(k)} \tag{1.5}$$
and the identity
$$\|\sin\Theta(V_1, V_2)\|_{(k)} = \|V_{2\perp}^T V_1\|_{(k)}, \tag{1.6}$$
following from [7].
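This distance is straightforward to compute: take the singular values of $V_1^T V_2$ (the cosines of the principal angles) and sum the $k$ largest of the corresponding sines. A minimal NumPy sketch (the helper name is ours):

```python
import numpy as np

def sin_theta_ky_fan(V1, V2, k):
    """Ky Fan (k) norm of sin Theta(V1, V2) for V1, V2 in O_{d,r}."""
    cosines = np.linalg.svd(V1.T @ V2, compute_uv=False)   # sigma_1 >= ... >= sigma_r
    sines = np.sqrt(np.clip(1.0 - cosines**2, 0.0, None))  # clip guards rounding
    return np.sort(sines)[::-1][:k].sum()                  # sum the k largest sines
```

Note that for $k \ge r$ the slice simply sums all $r$ sines, matching the nuclear-norm case of the distance.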
As a byproduct of Theorem 1.1, we can derive the perturbation bounds for the leading singular subspaces $U$ and $V$ under the Ky Fan norm $\sin\Theta$ distance, i.e.,
$$\|\sin\Theta(\hat{U}, U)\|_{(k)} \le \frac{2\|E_r\|_{(k)}}{\sigma_r(A)}, \qquad \|\sin\Theta(\hat{V}, V)\|_{(k)} \le \frac{2\|E_r\|_{(k)}}{\sigma_r(A)}.$$
Furthermore, we also give the corresponding lower bounds to show the above upper bounds are sharp.
Firstly, let us introduce some lemmas in order to prove Theorem 1.1.
A function $\Phi: \mathbb{R}^d \to \mathbb{R}$ is called a symmetric gauge function ([9]) if (1) $x \ne 0 \Rightarrow \Phi(x) > 0$; (2) $\Phi(\alpha x) = |\alpha|\Phi(x)$ for $\alpha \in \mathbb{R}$; (3) $\Phi(x + y) \le \Phi(x) + \Phi(y)$ for any $x, y \in \mathbb{R}^d$; and (4) $\Phi(J x_\pi) = \Phi(x)$, where $J$ is any diagonal matrix whose diagonal elements are $1$ or $-1$, and $\pi$ is any permutation of $\{1, \ldots, d\}$.
For $x, y \in \mathbb{R}^d$, define the function
$$\Psi(y) := \sup_{\Phi(x) = 1} \langle y, x \rangle.$$
It is easy to check that $\Psi(\cdot)$ is also a symmetric gauge function; $\Psi$ is called the dual symmetric gauge function of $\Phi$. In particular, for a matrix $A \in \mathbb{R}^{d_1 \times d_2}$, we can define
$$\Phi(A) := \Phi(\sigma_1, \ldots, \sigma_{d_1 \wedge d_2}),$$
where $\sigma_1, \ldots, \sigma_{d_1 \wedge d_2}$ are the singular values of $A$. The following lemma is Lemma 3.4 in [9].
Lemma 2.1. Let $A, B \in \mathbb{R}^{d_1 \times d_2}$ with singular values $\sigma_1 \ge \cdots \ge \sigma_{d_1 \wedge d_2} \ge 0$ and $\xi_1 \ge \cdots \ge \xi_{d_1 \wedge d_2} \ge 0$, respectively. Then
$$\max_{U \in \mathbb{O}_{d_1},\, V \in \mathbb{O}_{d_2}} \operatorname{tr}(U A V B^T) = \sum_{i=1}^{d_1 \wedge d_2} \sigma_i \xi_i. \tag{2.1}$$
According to Lemma 2.1, we introduce a dual characterization lemma.
Lemma 2.2. Let $A \in \mathbb{R}^{d_1 \times d_2}$. There exists a symmetric gauge function $\Psi_k(\cdot)$ such that
$$\|A_r\|_{(k)} = \sup_{\Psi_k(X) = 1,\, \operatorname{rank}(X) \le r} \operatorname{tr}(X^T A) \tag{2.2}$$
for $k \in \{1, 2, \ldots, d_1 \wedge d_2\}$. In particular, if $\operatorname{rank}(A) \le r$, then
$$\|A\|_{(k)} = \sup_{\Psi_k(X) = 1,\, \operatorname{rank}(X) \le r} \operatorname{tr}(X^T A). \tag{2.3}$$
Proof. For any $k \in \{1, 2, \ldots, d_1 \wedge d_2\}$, define
$$\Phi_k(A) = \Phi_k(\sigma_1, \sigma_2, \ldots, \sigma_{d_1 \wedge d_2}) := \sum_{i=1}^{k} \sigma_i,$$
where $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{d_1 \wedge d_2} \ge 0$ are the singular values of $A$. Clearly, $\Phi_k$ is a symmetric gauge function and $\Phi_k(A) = \|A\|_{(k)}$. Furthermore, denote by $\Psi_k$ the dual symmetric gauge function of $\Phi_k$; then for any $U \in \mathbb{O}_{d_1}, V \in \mathbb{O}_{d_2}$, we have $\Psi_k(U^T X V^T) = \Psi_k(X)$ and
$$\sup_{\Psi_k(X)=1,\, \operatorname{rank}(X)\le r} \operatorname{tr}(X^T A) = \sup_{\Psi_k(U^T X V^T)=1,\, \operatorname{rank}(U^T X V^T)\le r} \operatorname{tr}(V X^T U A) = \sup_{\Psi_k(X)=1,\, \operatorname{rank}(X)\le r} \operatorname{tr}(V X^T U A) = \sup_{\Psi_k(X)=1,\, \operatorname{rank}(X)\le r} \max_{U \in \mathbb{O}_{d_1},\, V \in \mathbb{O}_{d_2}} \operatorname{tr}(V X^T U A).$$
This along with Lemma 2.1 shows that
$$\sup_{\Psi_k(X)=1,\, \operatorname{rank}(X)\le r} \operatorname{tr}(X^T A) = \sup_{\Psi_k(X)=1,\, \operatorname{rank}(X)\le r} \sum_{i=1}^{d_1 \wedge d_2} \sigma_i \xi_i = \sup_{\Psi_k(X)=1} \sum_{i=1}^{r} \sigma_i \xi_i = \Phi_k(\sigma_1, \ldots, \sigma_r, 0, \ldots, 0) = \Phi_k(A_r) = \|A_r\|_{(k)},$$
where $\xi_1 \ge \cdots \ge \xi_{d_1 \wedge d_2} \ge 0$ are the singular values of $X$.
For any $U \in \mathbb{O}_{d, r}$, $P_U = UU^T$ denotes the projection matrix onto the column span of $U$. The next technical lemma is useful in the proof of Theorem 1.1.
Lemma 2.3. Let $\hat{A} = A + E \in \mathbb{R}^{d_1 \times d_2}$ with $\operatorname{rank}(A) = r$, and suppose (1.2) holds. Then for any $k \in \{1, 2, \ldots, d_1 \wedge d_2\}$,
$$\max\left\{\|P_{\hat{U}_\perp} A\|_{(k)},\ \|A P_{\hat{V}_\perp}\|_{(k)}\right\} \le 2\|E_r\|_{(k)}.$$
Proof. Since $\operatorname{rank}(P_{\hat{U}_\perp} A) \le \operatorname{rank}(A) = r$, (2.3) of Lemma 2.2 applies, and we have
$$\|P_{\hat{U}_\perp} A\|_{(k)} = \sup_{\Psi_k(X)=1,\, \operatorname{rank}(X)\le r} \operatorname{tr}[X^T(P_{\hat{U}_\perp} A)] = \sup_{\Psi_k(X)=1,\, \operatorname{rank}(X)\le r} \operatorname{tr}[X^T(P_{\hat{U}_\perp}\hat{A} - P_{\hat{U}_\perp}E)] \le \sup_{\Psi_k(X)=1,\, \operatorname{rank}(X)\le r} \operatorname{tr}[X^T(P_{\hat{U}_\perp}\hat{A})] + \sup_{\Psi_k(X)=1,\, \operatorname{rank}(X)\le r} \operatorname{tr}[X^T(P_{\hat{U}_\perp}E)].$$
According to Lemma 2.2 and (2.2),
$$\|P_{\hat{U}_\perp} A\|_{(k)} \le \|(P_{\hat{U}_\perp}\hat{A})_r\|_{(k)} + \|(P_{\hat{U}_\perp}E)_r\|_{(k)}. \tag{2.4}$$
In addition, $\|(P_{\hat{U}_\perp}\hat{A})_r\|_{(k)} = \|(\hat{A} - \hat{A}_r)_r\|_{(k)}$ because $P_{\hat{U}}\hat{A} = \hat{A}_r$. On the other hand, based on Theorem 2 in [8] and the fact that the norm $\|(\cdot)_l\|_{(k)}$ is unitarily invariant, we have
$$\|(A - A_r)_l\|_{(k)} = \inf_{M \in \mathbb{R}^{d_1 \times d_2},\, \operatorname{rank}(M) \le r} \|(A - M)_l\|_{(k)}.$$
Therefore,
$$\|(P_{\hat{U}_\perp}\hat{A})_r\|_{(k)} = \inf_{\operatorname{rank}(M) \le r} \|(\hat{A} - M)_r\|_{(k)} \le \|(\hat{A} - P_U \hat{A})_r\|_{(k)} = \|(P_{U_\perp} E)_r\|_{(k)}.$$
For two matrices $B, C \in \mathbb{R}^{d_1 \times d_2}$, it is known that
$$\sigma_{i+j-1}(BC^T) \le \sigma_i(B) \cdot \sigma_j(C). \tag{2.5}$$
Thus, $\sigma_i(P_{U_\perp} E) \le \sigma_1(P_{U_\perp})\sigma_i(E) = \sigma_i(E)$ and, similarly, $\sigma_i(P_{\hat{U}_\perp} E) \le \sigma_i(E)$. Hence, by (2.4),
$$\|P_{\hat{U}_\perp} A\|_{(k)} \le 2\|E_r\|_{(k)}.$$
Similarly, $\|A P_{\hat{V}_\perp}\|_{(k)} \le 2\|E_r\|_{(k)}$. This completes the proof of Lemma 2.3.
Now, we are in a position to prove Theorem 1.1.
Proof. By (1.2), $\hat{U}$ is composed of the first $r$ left singular vectors of $\hat{A}$; thus, $\hat{A}_r = P_{\hat{U}}\hat{A}$. For any $k \in \{1, 2, \ldots, d_1 \wedge d_2\}$,
$$\|\hat{A}_r - A\|_{(k)} = \|P_{\hat{U}}\hat{A} - (P_{\hat{U}} + P_{\hat{U}_\perp})A\|_{(k)} = \|P_{\hat{U}}E - P_{\hat{U}_\perp}A\|_{(k)} \le \|P_{\hat{U}}E\|_{(k)} + \|P_{\hat{U}_\perp}A\|_{(k)}.$$
This together with (2.5) and Lemma 2.3 yields
$$\|\hat{A}_r - A\|_{(k)} \le 3\|E_r\|_{(k)}.$$
The proof of Theorem 1.1 is complete.
Proof. First, for any $k \le r$, define $A_i, E_i \in \mathbb{R}^{d_1 \times d_2}$ $(i = 1, 2)$ by
$$A_1 = \begin{pmatrix} \frac{t}{k} I_r & 0 & 0 \\ 0 & 0_{r \times r} & 0 \\ 0 & 0 & 0_{d_1-2r, d_2-2r} \end{pmatrix}, \qquad E_1 = \begin{pmatrix} 0_{r \times r} & 0 & 0 \\ 0 & \frac{t}{k} I_r & 0 \\ 0 & 0 & 0_{d_1-2r, d_2-2r} \end{pmatrix},$$
$$A_2 = \begin{pmatrix} 0_{r \times r} & 0 & 0 \\ 0 & \frac{t}{k} I_r & 0 \\ 0 & 0 & 0_{d_1-2r, d_2-2r} \end{pmatrix}, \qquad E_2 = \begin{pmatrix} \frac{t}{k} I_r & 0 & 0 \\ 0 & 0_{r \times r} & 0 \\ 0 & 0 & 0_{d_1-2r, d_2-2r} \end{pmatrix};$$
then $A_1 + E_1 = A_2 + E_2 = \hat{A}$, $\operatorname{rank}(A_1) = \operatorname{rank}(A_2) = r$, and $\|E_1\|_{(k)} = \|E_2\|_{(k)} = k \cdot \frac{t}{k} = t$. Therefore, $(A_1, E_1), (A_2, E_2) \in \mathcal{F}_r(t)$.
For any estimator $\tilde{A}$ of $A$, one derives
$$\inf_{\tilde{A}} \sup_{(A, E) \in \mathcal{F}_r(t)} \|\tilde{A} - A\|_{(k)} \ge \inf_{\tilde{A}} \max\{\|\tilde{A} - A_1\|_{(k)}, \|\tilde{A} - A_2\|_{(k)}\} \ge \inf_{\tilde{A}} \frac{1}{2}\left(\|\tilde{A} - A_1\|_{(k)} + \|\tilde{A} - A_2\|_{(k)}\right) \ge \frac{1}{2}\|A_1 - A_2\|_{(k)} = \frac{t}{2}. \tag{2.6}$$
Next, we show that Theorem 1.2 also holds for $k > r$. Take
$$A_1 = \begin{pmatrix} \frac{t}{r} I_r & 0 & 0 \\ 0 & 0_{r \times r} & 0 \\ 0 & 0 & 0_{d_1-2r, d_2-2r} \end{pmatrix}, \qquad E_1 = \begin{pmatrix} 0_{r \times r} & 0 & 0 \\ 0 & \frac{t}{r} I_r & 0 \\ 0 & 0 & 0_{d_1-2r, d_2-2r} \end{pmatrix};$$
$$A_2 = \begin{pmatrix} 0_{r \times r} & 0 & 0 \\ 0 & \frac{t}{r} I_r & 0 \\ 0 & 0 & 0_{d_1-2r, d_2-2r} \end{pmatrix}, \qquad E_2 = \begin{pmatrix} \frac{t}{r} I_r & 0 & 0 \\ 0 & 0_{r \times r} & 0 \\ 0 & 0 & 0_{d_1-2r, d_2-2r} \end{pmatrix}.$$
Then $A_1 + E_1 = A_2 + E_2 = \hat{A}$, $\operatorname{rank}(A_1) = \operatorname{rank}(A_2) = r$, and $\|E_1\|_{(k)} = \|E_2\|_{(k)} = t$. Therefore, $(A_1, E_1), (A_2, E_2) \in \mathcal{F}_r(t)$. A similar argument proves (2.6), i.e.,
$$\inf_{\tilde{A}} \sup_{(A, E) \in \mathcal{F}_r(t)} \|\tilde{A} - A\|_{(k)} \ge \frac{t}{2}.$$
The proof of Theorem 1.2 is finished.
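The indistinguishable pairs used in the proof can be verified directly. The following sketch checks the case $k \le r$, with small illustrative dimensions of our choosing:

```python
import numpy as np

# Case k <= r of the proof: (A1, E1) and (A2, E2) yield the same observation,
# so no estimator can distinguish the two signal matrices A1 and A2.
d1 = d2 = 8; r = 3; k = 2; t = 1.0
A1 = np.zeros((d1, d2)); A1[:r, :r] = (t / k) * np.eye(r)
E1 = np.zeros((d1, d2)); E1[r:2*r, r:2*r] = (t / k) * np.eye(r)
A2, E2 = E1.copy(), A1.copy()

ky_fan = lambda M, kk: np.linalg.svd(M, compute_uv=False)[:kk].sum()
assert np.allclose(A1 + E1, A2 + E2)                      # identical observations
assert np.linalg.matrix_rank(A1) == np.linalg.matrix_rank(A2) == r
assert abs(ky_fan(E1, k) - t) < 1e-12                     # ||E1||_(k) = t
assert abs(ky_fan(A1 - A2, k) - t) < 1e-12                # separation t => risk >= t/2
```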
As a byproduct of the perturbation theory, this paper derives $\sin\Theta$ perturbation bounds for the left and right singular subspaces $U, V$ under the Ky Fan norm.
Theorem 3.1. Let $\hat{A} = A + E \in \mathbb{R}^{d_1 \times d_2}$ with $\operatorname{rank}(A) = r$. If the singular value decompositions (1.1) and (1.2) hold, then
$$\|\sin\Theta(\hat{U}, U)\|_{(k)} \le \frac{2\|E_r\|_{(k)}}{\sigma_r(A)}, \qquad \|\sin\Theta(\hat{V}, V)\|_{(k)} \le \frac{2\|E_r\|_{(k)}}{\sigma_r(A)}$$
for any $k \in \{1, 2, \ldots, d_1 \wedge d_2\}$.
Proof. By Theorem 3.9 (II) in [9], one knows that $\|BC^T\|_{(k)} \ge \|B\|_{(k)}\, \sigma_{d_1 \wedge d_2}(C)$ for any two matrices $B, C \in \mathbb{R}^{d_1 \times d_2}$. This with (1.6) shows
$$\|\sin\Theta(\hat{U}, U)\|_{(k)} = \|\hat{U}_\perp^T U\|_{(k)} \le \frac{\|\hat{U}_\perp^T U U^T A\|_{(k)}}{\sigma_r(U^T A)}.$$
According to (1.1) and $\operatorname{rank}(A) = r$, one has $UU^T A = A$ and $\sigma_r(U^T A) = \sigma_r(A)$. Thus
$$\|\sin\Theta(\hat{U}, U)\|_{(k)} \le \frac{\|\hat{U}_\perp^T A\|_{(k)}}{\sigma_r(A)} \le \frac{2\|E_r\|_{(k)}}{\sigma_r(A)}$$
thanks to Lemma 2.3. Similarly, one can get $\|\sin\Theta(\hat{V}, V)\|_{(k)} \le \frac{2\|E_r\|_{(k)}}{\sigma_r(A)}$. This concludes the proof of Theorem 3.1.
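Theorem 3.1 can be checked on a random instance. A sketch with illustrative dimensions and noise level of our choosing:

```python
import numpy as np

def sin_theta(V1, V2, k):
    c = np.linalg.svd(V1.T @ V2, compute_uv=False)            # principal cosines
    return np.sort(np.sqrt(np.clip(1 - c**2, 0, None)))[::-1][:k].sum()

rng = np.random.default_rng(2)
d, r, k = 40, 3, 5
U, _ = np.linalg.qr(rng.standard_normal((d, r)))
V, _ = np.linalg.qr(rng.standard_normal((d, r)))
A = U @ np.diag([9.0, 7.0, 5.0]) @ V.T                        # sigma_r(A) = 5
E = 0.05 * rng.standard_normal((d, d))
Uh, _, Vht = np.linalg.svd(A + E)
Uh, Vh = Uh[:, :r], Vht[:r, :].T                              # leading subspaces of A + E

Er_norm = np.linalg.svd(E, compute_uv=False)[:min(k, r)].sum()  # ||E_r||_(k)
assert sin_theta(Uh, U, k) <= 2 * Er_norm / 5.0               # Theorem 3.1, left
assert sin_theta(Vh, V, k) <= 2 * Er_norm / 5.0               # Theorem 3.1, right
```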
Theorem 3.2. For $k \in \{1, 2, \ldots, d_1 \wedge d_2\}$, define the class
$$\mathcal{F}_r(\alpha, \beta) = \{(A, E) : \operatorname{rank}(A) = r,\ \sigma_r(A) \ge \alpha,\ \|E\|_{(k)} \le \beta\}.$$
If $r \le \frac{1}{2}(d_1 \wedge d_2)$ and $\alpha(k \wedge r) \ge \beta$, then for any estimators $\tilde{U}$ and $\tilde{V}$ based on the observation matrix $A + E$, we have
$$\inf_{\tilde{U}} \sup_{(A, E) \in \mathcal{F}_r(\alpha, \beta)} \|\sin\Theta(\tilde{U}, U)\|_{(k)} \ge \frac{1}{2\sqrt{10}} \cdot \frac{\beta}{\alpha}, \tag{3.1}$$
$$\inf_{\tilde{V}} \sup_{(A, E) \in \mathcal{F}_r(\alpha, \beta)} \|\sin\Theta(\tilde{V}, V)\|_{(k)} \ge \frac{1}{2\sqrt{10}} \cdot \frac{\beta}{\alpha}. \tag{3.2}$$
Proof. We only need to show (3.2), since (3.1) can be obtained by a similar process. First, introduce the singular value decomposition
$$\begin{pmatrix} \alpha & \frac{\beta}{k \wedge r} \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} u_{11} & u_{12} \\ u_{21} & u_{22} \end{pmatrix} \begin{pmatrix} \sigma_1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} v_{11} & v_{12} \\ v_{21} & v_{22} \end{pmatrix}^T = \begin{pmatrix} u_{11} \\ u_{21} \end{pmatrix} \sigma_1 \begin{pmatrix} v_{11} \\ v_{21} \end{pmatrix}^T;$$
then by Lemma 3 in [2] and $\alpha(k \wedge r) \ge \beta$, we know
$$|v_{21}| \ge \frac{1}{\sqrt{10}\,(k \wedge r)} \cdot \frac{\beta}{\alpha}. \tag{3.3}$$
Second, based on the above decomposition, construct the following matrices:
$$A_1 = \begin{pmatrix} \sigma_1 u_{11} v_{11} I_r & \sigma_1 u_{11} v_{21} I_r & 0 \\ \sigma_1 u_{21} v_{11} I_r & \sigma_1 u_{21} v_{21} I_r & 0 \\ 0 & 0 & 0_{d_1-2r, d_2-2r} \end{pmatrix}, \qquad E_1 = 0_{d_1, d_2};$$
$$A_2 = \begin{pmatrix} \alpha I_r & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0_{d_1-2r, d_2-2r} \end{pmatrix}, \qquad E_2 = \begin{pmatrix} 0 & \frac{\beta}{k \wedge r} I_r & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0_{d_1-2r, d_2-2r} \end{pmatrix}.$$
Obviously, $\operatorname{rank}(A_1) = \operatorname{rank}(A_2) = r$ and
$$\hat{A} = A_1 + E_1 = A_2 + E_2 = \begin{pmatrix} \alpha I_r & \frac{\beta}{k \wedge r} I_r & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0_{d_1-2r, d_2-2r} \end{pmatrix}.$$
On the other hand, it is easy to check that $\sigma_r(A_1) = \sigma_1(A_1) \ge \alpha$, $\|E_1\|_{(k)} = 0 \le \beta$, $\sigma_r(A_2) = \alpha$, and $\|E_2\|_{(k)} = (k \wedge r) \cdot \frac{\beta}{k \wedge r} = \beta$. Hence, $(A_1, E_1), (A_2, E_2) \in \mathcal{F}_r(\alpha, \beta)$. Let $V_1, V_2$ be the matrices of the leading $r$ right singular vectors of $A_1, A_2$, respectively; then
$$V_1 = \begin{pmatrix} v_{11} I_r \\ v_{21} I_r \\ 0_{d_2-2r} \end{pmatrix}, \qquad V_2 = \begin{pmatrix} I_r \\ 0_{r} \\ 0_{d_2-2r} \end{pmatrix}$$
follow from the structure of $A_1, A_2$. Therefore, for any estimator $\tilde{V}$ of the leading right singular subspace, we have
$$\inf_{\tilde{V}} \sup_{(A, E) \in \mathcal{F}_r(\alpha, \beta)} \|\sin\Theta(\tilde{V}, V)\|_{(k)} \ge \inf_{\tilde{V}} \max\{\|\sin\Theta(\tilde{V}, V_1)\|_{(k)}, \|\sin\Theta(\tilde{V}, V_2)\|_{(k)}\} \ge \frac{1}{2}\left(\|\sin\Theta(\tilde{V}, V_1)\|_{(k)} + \|\sin\Theta(\tilde{V}, V_2)\|_{(k)}\right) \stackrel{(1.5)}{\ge} \frac{1}{2}\|\sin\Theta(V_1, V_2)\|_{(k)} \stackrel{(1.6)}{=} \frac{1}{2}\|v_{21} I_r\|_{(k)} = \frac{1}{2}(k \wedge r)|v_{21}| \stackrel{(3.3)}{\ge} \frac{1}{2\sqrt{10}} \cdot \frac{\beta}{\alpha}.$$
The proof of Theorem 3.2 is finished.
Remark 3.1. In Theorem 3.2, the assumption $\alpha(k \wedge r) \ge \beta$ is necessary to obtain a consistent estimator. In fact, if $\alpha(k \wedge r) < \beta$, no stable algorithm can recover either $U$ or $V$, in the sense that there is a uniform constant $\frac{1}{2\sqrt{2}}$ such that
$$\inf_{\tilde{U}} \sup_{(A, E) \in \mathcal{F}_r(\alpha, \beta)} \|\sin\Theta(\tilde{U}, U)\|_{(k)} \ge \frac{1}{2\sqrt{2}}, \qquad \inf_{\tilde{V}} \sup_{(A, E) \in \mathcal{F}_r(\alpha, \beta)} \|\sin\Theta(\tilde{V}, V)\|_{(k)} \ge \frac{1}{2\sqrt{2}}.$$
Proof. Let
$$\begin{pmatrix} \alpha & \frac{\beta}{k \wedge r} \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} u_{11} \\ u_{21} \end{pmatrix} \sigma_1 \begin{pmatrix} v_{11} \\ v_{21} \end{pmatrix}^T;$$
then by Lemma 3 in [2] and $\alpha(k \wedge r) < \beta$, we know that $|v_{21}| \ge \frac{1}{\sqrt{2}}$. Therefore, Remark 3.1 follows from the same argument as in the proof of Theorem 3.2.
Remark 3.2. By Theorem 3.2, the rates given in Theorem 3.1 are optimal; the corresponding lower bounds for the singular subspaces were not given in Luo et al. [6].
In this section, we provide some numerical studies to support our theoretical results. Throughout the simulations, we use the nuclear norm $\|\cdot\|_*$ (the sum of all singular values) as the error metric, i.e., $k = d_1 \wedge d_2$. Without loss of generality, we assume $d_1 = d_2 := d$. In each setting, we randomly generate the perturbation $E = uv^T + Z \in \mathbb{R}^{d \times d}$, where $u, v \in \mathbb{R}^d$ are random unit vectors and $Z$ has independent identically distributed $N(0, \sigma)$ entries. On the other hand, we generate the low-rank matrix $A = U\Sigma_r V^T$ with a special structure: $U, V \in \mathbb{R}^{d \times r}$ are drawn independently and uniformly at random from $\mathbb{O}_{d, r}$, and $\Sigma_r$ is an $r \times r$ diagonal matrix with singular values decaying polynomially as $(\Sigma_r)_{ii} = 10\,i^{-1}$, $1 \le i \le r$. Each simulation setting is repeated 100 times and the average values are reported. Figure 1 shows the results of the numerical studies.
We set $d \in \{100, 200\}$, $r \in \{3, 6, 9, 12, 15\}$, and $\sigma = 0.004$. The upper bounds of Theorem 1.1 and (1.3), together with the true value of $\|\hat{A}_r - A\|_*$, are given in Figure 1. It shows that the upper bound of Theorem 1.1 is tighter than the upper bound in (1.3) in all settings. Furthermore, the upper bound of Theorem 1.1 remains steady, while the upper bound of (1.3) increases significantly when $d$ increases from 100 to 200.
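A single replicate of this experiment can be sketched as follows. This reflects our reading of the setup: $(\Sigma_r)_{ii} = 10/i$ and $\sigma$ taken as the standard deviation of the entries of $Z$:

```python
import numpy as np

def nuclear(M):
    return np.linalg.svd(M, compute_uv=False).sum()

def best_rank(M, r):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(3)
d, r, sigma = 100, 6, 0.004
u = rng.standard_normal(d); u /= np.linalg.norm(u)
v = rng.standard_normal(d); v /= np.linalg.norm(v)
E = np.outer(u, v) + sigma * rng.standard_normal((d, d))   # E = u v^T + Z
Q1, _ = np.linalg.qr(rng.standard_normal((d, r)))          # random U in O_{d,r}
Q2, _ = np.linalg.qr(rng.standard_normal((d, r)))          # random V in O_{d,r}
A = Q1 @ np.diag(10.0 / np.arange(1, r + 1)) @ Q2.T        # (Sigma_r)_ii = 10/i

err = nuclear(best_rank(A + E, r) - A)     # true error ||A_hat_r - A||_*
bound_thm = 3 * nuclear(best_rank(E, r))   # Theorem 1.1 bound
bound_13 = 2 * nuclear(E)                  # bound (1.3)
assert err <= bound_thm <= bound_13        # Theorem 1.1 is the tighter bound here
```

Averaging `err`, `bound_thm`, and `bound_13` over 100 such replicates for each $(d, r)$ reproduces the quantities plotted in Figure 1.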
In this paper, we give a sharp upper bound for the estimation of a rank-$r$ matrix $A$ under the Ky Fan norm, and show that it is optimal by establishing the corresponding lower bound. As a byproduct, we provide perturbation bounds for the singular subspaces under the Ky Fan norm $\sin\Theta$ distance and give the corresponding lower bounds to show their optimality. Finally, we provide numerical studies to support our theoretical results.
As a unitarily invariant norm, the Ky Fan norm, which differs from the Schatten-$q$ norm, is an important matrix norm, so it makes sense to study the perturbation bound for the low-rank matrix under it. It is worth mentioning that the approach used to prove Lemma 2.3 can be generalized to any unitarily invariant norm; therefore, it can be used to study other perturbation problems in the future.
The authors would like to express their gratitude to the editor and anonymous referees for their constructive and valuable suggestions which improved the paper. This work is supported by the National Natural Science Foundation of China (No. 11771030 and 12171016).
The authors declare that they have no conflicts of interest.
[1] T. T. Cai, A. R. Zhang, Rate-optimal perturbation bounds for singular subspaces with applications to high-dimensional statistics, Ann. Statist., 46 (2018), 60–89. https://doi.org/10.1214/17-AOS1541
[2] T. T. Cai, A. R. Zhang, Supplement to "Rate-optimal perturbation bounds for singular subspaces with applications to high-dimensional statistics", 2018. https://doi.org/10.1214/17-AOS1541SUPP
[3] C. Davis, W. M. Kahan, The rotation of eigenvectors by a perturbation, SIAM J. Numer. Anal., 7 (1970), 1–46. https://doi.org/10.1137/0707001
[4] J. Fan, W. Wang, Y. Zhong, An $l_\infty$ eigenvector perturbation bound and its application to robust covariance estimation, J. Mach. Learn. Res., 18 (2018), 1–42.
[5] J. W. Huang, J. J. Wang, F. Zhang, H. L. Wang, W. D. Wang, Perturbation analysis of low-rank matrix stable recovery, Int. J. Wavelets Multi., 19 (2021), 2050091. https://doi.org/10.1142/S0219691320500915
[6] Y. T. Luo, R. G. Han, A. R. Zhang, A Schatten-q low-rank matrix perturbation analysis via perturbation projection error bound, Linear Algebra Appl., 630 (2021), 225–240. https://doi.org/10.1016/j.laa.2021.08.005
[7] Y. M. Liu, C. G. Ren, An optimal perturbation bound, Math. Method. Appl. Sci., 42 (2019), 3791–3798. https://doi.org/10.1002/mma.5612
[8] L. Mirsky, Symmetric gauge functions and unitarily invariant norms, Q. J. Math., 11 (1960), 50–59. https://doi.org/10.1093/qmath/11.1.50
[9] G. W. Stewart, J. G. Sun, Matrix perturbation theory, New York: Academic Press, 1990.
[10] V. Vu, Singular vectors under random perturbation, Random Struct. Algor., 39 (2011), 526–538. https://doi.org/10.1002/rsa.20367
[11] R. R. Wang, Singular vector perturbation under Gaussian noise, SIAM J. Matrix Anal. Appl., 36 (2015), 158–177. https://doi.org/10.1137/130938177
[12] P. A. Wedin, Perturbation bounds in connection with singular value decomposition, BIT, 12 (1972), 99–111. https://doi.org/10.1007/BF01932678