Research article

Minimax perturbation bounds of the low-rank matrix under Ky Fan norm

  • Received: 21 September 2021 Revised: 10 February 2022 Accepted: 11 February 2022 Published: 15 February 2022
  • MSC : 15A42, 65F55

  • This paper considers the minimax perturbation bounds of the low-rank matrix under the Ky Fan norm. We first explore the upper bounds via the best rank-$r$ approximation $\hat{A}_r$ of the observation matrix $\hat{A}$. Next, lower bounds are established by constructing special matrix pairs, showing that the upper bounds on the low-rank matrix estimation error are tight. In addition, we derive rate-optimal perturbation bounds for the left and right singular subspaces under the Ky Fan norm $\sin\Theta$ distance. Finally, some simulations are carried out to support our theory.

    Citation: Xinyu Qi, Jinru Wang, Jiating Shao. Minimax perturbation bounds of the low-rank matrix under Ky Fan norm[J]. AIMS Mathematics, 2022, 7(5): 7595-7605. doi: 10.3934/math.2022426




    Singular value decomposition (SVD) has been widely used in statistics, machine learning, and applied mathematics. Perturbation bounds often play a critical role in the analysis of the SVD. To be more specific, let

    $$\hat{A} = A + E,$$

    where both $A$ and $E$ have the same size $d_1 \times d_2$; $A$ is the signal matrix we are interested in, while $E$ stands for a perturbation matrix. In this paper, suppose that $\hat{A}$ and $A$ have the following singular value decompositions:

    $$A = U\Sigma_r V^T + U_\perp \Sigma_{r\perp} V_\perp^T = \sum_{i=1}^{r}\sigma_i u_i v_i^T + \sum_{i=r+1}^{d_1 \wedge d_2}\sigma_i u_i v_i^T, \tag{1.1}$$
    $$\hat{A} = \hat{U}\hat{\Sigma}_r \hat{V}^T + \hat{U}_\perp \hat{\Sigma}_{r\perp} \hat{V}_\perp^T = \sum_{i=1}^{r}\hat{\sigma}_i \hat{u}_i \hat{v}_i^T + \sum_{i=r+1}^{d_1 \wedge d_2}\hat{\sigma}_i \hat{u}_i \hat{v}_i^T, \tag{1.2}$$

    where $r \le \operatorname{rank}(A)$ and $d_1 \wedge d_2$ stands for $\min\{d_1, d_2\}$. The singular values $\sigma_i$ and $\hat{\sigma}_i$ are in decreasing order. $U = [u_1, \ldots, u_r], \hat{U} = [\hat{u}_1, \ldots, \hat{u}_r] \in \mathbb{O}_{d_1, r}$ (the set of all $d_1 \times r$ matrices with orthonormal columns, with $\mathbb{O}_{d_1} := \mathbb{O}_{d_1, d_1}$), and $V = [v_1, \ldots, v_r], \hat{V} = [\hat{v}_1, \ldots, \hat{v}_r] \in \mathbb{O}_{d_2, r}$. Unlike compressed sensing [5], which aims to reconstruct the original signal, our goal is to estimate the underlying low-rank matrix $A$ and its leading left and right singular matrices $U, V$.

    The problem of estimating $U, V$ has been widely studied in the literature [1,3,4,10,12]. Among these results, Davis and Kahan [3] and Wedin [12] established the fundamental methods of matrix perturbation theory; Vu [10] and Wang [11] discussed the rotation of singular vectors under random perturbation; Cai and Zhang [1] studied rate-optimal perturbation bounds for singular subspaces; Fan et al. [4] gave an eigenvector perturbation bound and applied it to robust covariance estimation. In addition, Luo et al. [6] considered the perturbation bound under the Schatten-$q$ norm. So far, only a few existing works have focused on the perturbation analysis of the matrix $A$ itself. This paper considers the estimation of the rank-$r$ matrix $A$ under the Ky Fan norm, which extends the results of Luo et al. [6].

    For a given $k \in \{1, 2, \ldots, d_1 \wedge d_2\}$, the Ky Fan norm $\|M\|_{(k)}$ of a matrix $M \in \mathbb{R}^{d_1 \times d_2}$ is given by $\|M\|_{(k)} = \sum_{i=1}^{k}\sigma_i(M)$. Clearly, $\|\cdot\|_{(k)}$ is a unitarily invariant norm.
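    Computationally, the Ky Fan norm is just a partial sum of singular values. A minimal NumPy sketch (the helper name `ky_fan_norm` is ours, for illustration only), which also checks the extreme cases $k = 1$ and $k = d_1 \wedge d_2$ discussed in Remark 1.2 below:

```python
import numpy as np

def ky_fan_norm(M, k):
    """Ky Fan norm ||M||_(k): sum of the k largest singular values of M."""
    s = np.linalg.svd(M, compute_uv=False)  # singular values in decreasing order
    return s[:k].sum()

M = np.random.randn(5, 4)
# k = 1 gives the spectral norm; k = min(d1, d2) gives the nuclear norm.
assert np.isclose(ky_fan_norm(M, 1), np.linalg.norm(M, 2))
assert np.isclose(ky_fan_norm(M, 4), np.linalg.norm(M, 'nuc'))
```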

    In this paper, we consider the estimation of a rank-$r$ matrix $A$ (i.e., $\Sigma_{r\perp} = 0$) via the rank-$r$ truncated SVD $\hat{A}_r := \hat{U}\hat{\Sigma}_r\hat{V}^T$ of $\hat{A}$. It is well known that $\hat{A}_r$ is the best rank-$r$ approximation of $\hat{A}$. Here and throughout, $A_l$ or $(A)_l$ denotes the best rank-$l$ approximation of the matrix $A$.
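    In code, $\hat{A}_r$ is obtained by truncating the SVD after $r$ terms; a small helper (our own sketch, not from the paper):

```python
import numpy as np

def best_rank_r(M, r):
    """Best rank-r approximation M_r (Eckart-Young-Mirsky):
    keep the top r singular triplets of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
```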

    Firstly, we establish the following upper bound.

    Theorem 1.1. Let the observation matrix $\hat{A} = A + E \in \mathbb{R}^{d_1 \times d_2}$, where $A$ is an unknown rank-$r$ matrix and $E$ is the perturbation matrix. Then

    $$\|\hat{A}_r - A\|_{(k)} \le 3\|E_r\|_{(k)}, \quad k = 1, 2, \ldots, d_1 \wedge d_2,$$

    where $E_r$ denotes the best rank-$r$ approximation of the matrix $E$.

    Remark 1.1. According to the Eckart-Young-Mirsky theorem and $\operatorname{rank}(A) = r$, we have $\|\hat{A}_r - \hat{A}\|_{(k)} \le \|A - \hat{A}\|_{(k)}$. Therefore,

    $$\|\hat{A}_r - A\|_{(k)} \le \|\hat{A}_r - \hat{A}\|_{(k)} + \|\hat{A} - A\|_{(k)} \le 2\|\hat{A} - A\|_{(k)} = 2\|E\|_{(k)}. \tag{1.3}$$

    If $r \ll d_1 \wedge d_2$, then $\|E_r\|_{(k)}$ can be much smaller than $\|E\|_{(k)}$ for any $k \gg r$, so the bound in Theorem 1.1 can be much tighter than (1.3).
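    To see this gap concretely, the following sketch (our illustration, assuming Gaussian noise, whose spectrum is essentially flat) compares $3\|E_r\|_{(k)}$ from Theorem 1.1 with $2\|E\|_{(k)}$ from (1.3):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, k = 200, 3, 100                          # here r << d and k >> r
E = rng.standard_normal((d, d)) / np.sqrt(d)   # noise with a flat spectrum
s = np.linalg.svd(E, compute_uv=False)
print(3 * s[:r].sum())   # 3*||E_r||_(k): only the top r singular values enter
print(2 * s[:k].sum())   # 2*||E||_(k): grows with k for a flat spectrum
```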

    Remark 1.2. If $k = d_1 \wedge d_2$, both the Ky Fan norm and the Schatten-$1$ norm equal the nuclear norm; if $k = 1$, both the Ky Fan norm and the Schatten-$\infty$ norm equal the spectral norm. Otherwise, neither family of norms contains the other. Therefore, our results can be regarded as a supplement to the existing results.

    Before stating the lower bound, for any $t > 0$, we define the class of pairs $(A, E)$ as

    $$\mathcal{F}_r(t) = \{(A, E) : \operatorname{rank}(A) = r, \|E\|_{(k)} \le t\}. \tag{1.4}$$

    Here $A, E \in \mathbb{R}^{d_1 \times d_2}$ and $k \in \{1, 2, \ldots, d_1 \wedge d_2\}$.

    Theorem 1.2. For the low-rank perturbation model $\hat{A} = A + E \in \mathbb{R}^{d_1 \times d_2}$, if $r \le \frac{1}{2}(d_1 \wedge d_2)$, then for any estimator $\tilde{A}$ based on the observation matrix $A + E$,

    $$\inf_{\tilde{A}} \sup_{(A, E) \in \mathcal{F}_r(t)} \|\tilde{A} - A\|_{(k)} \ge \frac{t}{2},$$

    where $k \in \{1, 2, \ldots, d_1 \wedge d_2\}$.

    Theorem 1.2 shows that, up to constant factors, the upper bound given in Theorem 1.1 is sharp for the rank-$r$ truncated singular value decomposition estimator $\hat{A}_r$.

    The principal angle matrix $\Theta(V_1, V_2)$ of $V_1, V_2 \in \mathbb{O}_{d, r}$ is the diagonal matrix

    $$\Theta(V_1, V_2) = \operatorname{diag}\{\cos^{-1}(\sigma_1), \cos^{-1}(\sigma_2), \ldots, \cos^{-1}(\sigma_r)\}$$

    with the singular values $\sigma_i := \sigma_i(V_1^T V_2)$ of $V_1^T V_2$ satisfying $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r \ge 0$. When $r = 1$, $\Theta(V_1, V_2)$ coincides with the angle between two $d$-dimensional unit vectors. In this paper, the $\sin\Theta$ distance is used to measure the difference between $V_1$ and $V_2$, i.e.,

    $$\|\sin\Theta(V_1, V_2)\|_{(k)} = \|\operatorname{diag}\{\sin\cos^{-1}\sigma_1, \ldots, \sin\cos^{-1}\sigma_r\}\|_{(k)} = \sum_{i=1}^{k}(1 - \sigma_i^2)^{1/2}.$$

    Indeed, although $\|\sin\Theta(V_1, V_2)\|_{(k)}$ only defines a semi-metric on $\mathbb{O}_{d, r}$, it still satisfies

    $$\|\sin\Theta(V_1, V_2)\|_{(k)} \le \|\sin\Theta(V_1, V_3)\|_{(k)} + \|\sin\Theta(V_3, V_2)\|_{(k)} \tag{1.5}$$

    and

    $$\|\sin\Theta(V_1, V_2)\|_{(k)} = \|V_{2\perp}^T V_1\|_{(k)}, \tag{1.6}$$

    following from [7].
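    Numerically, the $\sin\Theta$ distance follows from the SVD of $V_1^T V_2$; the sketch below (our own helper) sums the $k$ largest sines, in accordance with the definition of the Ky Fan norm:

```python
import numpy as np

def sin_theta_kyfan(V1, V2, k):
    """||sin Theta(V1, V2)||_(k) for d x r matrices with orthonormal columns."""
    cosines = np.linalg.svd(V1.T @ V2, compute_uv=False)   # cosines of principal angles
    sines = np.sqrt(np.clip(1.0 - cosines**2, 0.0, None))  # corresponding sines
    return np.sort(sines)[::-1][:k].sum()                  # sum of the k largest sines
```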

    As a byproduct of Theorem 1.1, we can derive the following perturbation bounds for the leading singular subspaces $U$ and $V$ under the Ky Fan norm $\sin\Theta$ distance:

    $$\|\sin\Theta(\hat{U}, U)\|_{(k)} \le \frac{2\|E_r\|_{(k)}}{\sigma_r(A)}, \quad \|\sin\Theta(\hat{V}, V)\|_{(k)} \le \frac{2\|E_r\|_{(k)}}{\sigma_r(A)}.$$

    Furthermore, we also give the corresponding lower bounds to show that the above upper bounds are sharp.

    First, let us introduce some lemmas needed to prove Theorem 1.1.

    A function $\Phi : \mathbb{R}^d \to \mathbb{R}$ is called a symmetric gauge function ([9]) if (1) $x \ne 0 \Rightarrow \Phi(x) > 0$; (2) $\Phi(\alpha x) = |\alpha|\Phi(x)$ for $\alpha \in \mathbb{R}$; (3) $\Phi(x + y) \le \Phi(x) + \Phi(y)$ for any $x, y \in \mathbb{R}^d$; and (4) $\Phi(J x_\pi) = \Phi(x)$, where $J$ is any diagonal matrix whose diagonal elements are $1$ or $-1$, and $\pi$ is any permutation of $\{1, \ldots, d\}$.

    For $x, y \in \mathbb{R}^d$, define the function

    $$\Psi(y) := \sup_{\Phi(x) = 1} \langle y, x \rangle.$$

    It is easy to check that $\Psi(\cdot)$ is also a symmetric gauge function; $\Psi(y)$ is usually called the dual symmetric gauge function of $\Phi(x)$. In particular, for a matrix $A \in \mathbb{R}^{d_1 \times d_2}$, we can define

    $$\Phi(A) := \Phi(\sigma_1, \ldots, \sigma_{d_1 \wedge d_2}),$$

    where $\sigma_1, \ldots, \sigma_{d_1 \wedge d_2}$ are the singular values of $A$. The following lemma is Lemma 3.4 in [9].

    Lemma 2.1. Let $A, B \in \mathbb{R}^{d_1 \times d_2}$ with singular values $\sigma_1 \ge \cdots \ge \sigma_{d_1 \wedge d_2} \ge 0$ and $\xi_1 \ge \cdots \ge \xi_{d_1 \wedge d_2} \ge 0$, respectively. Then

    $$\max_{U \in \mathbb{O}_{d_1}, V \in \mathbb{O}_{d_2}} \operatorname{tr}(U A V B^T) = \sum_{i=1}^{d_1 \wedge d_2} \sigma_i \xi_i. \tag{2.1}$$
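    A quick numerical sanity check of (2.1) (our own sketch): the maximum is attained by rotating the singular bases of $A$ onto those of $B$.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((4, 3))
Ua, sa, Vta = np.linalg.svd(A)
Ub, sb, Vtb = np.linalg.svd(B)
U = Ub @ Ua.T    # aligns the left singular basis of A with that of B
V = Vta.T @ Vtb  # aligns the right singular basis of A with that of B
# tr(U A V B^T) attains the maximum sum_i sigma_i * xi_i of Lemma 2.1:
assert np.isclose(np.trace(U @ A @ V @ B.T), np.sum(sa * sb))
```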

    According to Lemma 2.1, we introduce a dual characterization lemma.

    Lemma 2.2. Let $A \in \mathbb{R}^{d_1 \times d_2}$. There exists a symmetric gauge function $\Psi_k(\cdot)$ such that

    $$\|A_r\|_{(k)} = \sup_{\Psi_k(X) = 1,\, \operatorname{rank}(X) \le r} \operatorname{tr}(X^T A) \tag{2.2}$$

    for $k \in \{1, 2, \ldots, d_1 \wedge d_2\}$. In the special case $\operatorname{rank}(A) \le r$,

    $$\|A\|_{(k)} = \sup_{\Psi_k(X) = 1,\, \operatorname{rank}(X) \le r} \operatorname{tr}(X^T A). \tag{2.3}$$

    Proof. For any $k \in \{1, 2, \ldots, d_1 \wedge d_2\}$, define

    $$\Phi_k(A) = \Phi_k(\sigma_1, \sigma_2, \ldots, \sigma_{d_1 \wedge d_2}) := \sum_{i=1}^{k}\sigma_i,$$

    where $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{d_1 \wedge d_2} \ge 0$ are the singular values of $A$. Clearly, $\Phi_k$ is a symmetric gauge function and $\Phi_k(A) = \|A\|_{(k)}$. Furthermore, denote by $\Psi_k$ the dual symmetric gauge function of $\Phi_k$. Then for any $U \in \mathbb{O}_{d_1}, V \in \mathbb{O}_{d_2}$, we have $\Psi_k(U^T X V^T) = \Psi_k(X)$ and

    $$\begin{aligned}
    \sup_{\Psi_k(X)=1,\, \operatorname{rank}(X)\le r} \operatorname{tr}(X^T A) &= \sup_{\Psi_k(U^T X V^T)=1,\, \operatorname{rank}(U^T X V^T)\le r} \operatorname{tr}(V X^T U A) \\
    &= \sup_{\Psi_k(X)=1,\, \operatorname{rank}(X)\le r} \operatorname{tr}(V X^T U A) \\
    &= \sup_{\Psi_k(X)=1,\, \operatorname{rank}(X)\le r}\; \max_{U \in \mathbb{O}_{d_1}, V \in \mathbb{O}_{d_2}} \operatorname{tr}(V X^T U A).
    \end{aligned}$$

    This along with Lemma 2.1 shows that

    $$\sup_{\Psi_k(X)=1,\, \operatorname{rank}(X)\le r} \operatorname{tr}(X^T A) = \sup_{\Psi_k(X)=1,\, \operatorname{rank}(X)\le r} \sum_{i=1}^{d_1 \wedge d_2}\sigma_i \xi_i = \sup_{\Psi_k(X)=1} \sum_{i=1}^{r}\sigma_i \xi_i = \Phi_k(\sigma_1, \ldots, \sigma_r, 0, \ldots, 0) = \Phi_k(A_r) = \|A_r\|_{(k)},$$

    where $\xi_1 \ge \cdots \ge \xi_{d_1 \wedge d_2} \ge 0$ are the singular values of $X$. This proves (2.2); in particular, if $\operatorname{rank}(A) \le r$, then $A_r = A$, which gives (2.3).

    For any $U \in \mathbb{O}_{d, r}$, $P_U = UU^T$ is the projection matrix onto the column span of $U$, and $P_{U_\perp} = I - P_U$ is the projection onto its orthogonal complement. The next technical lemma is useful in the proof of Theorem 1.1.

    Lemma 2.3. Let $\hat{A} = A + E \in \mathbb{R}^{d_1 \times d_2}$ with $\operatorname{rank}(A) = r$, and suppose (1.2) holds. Then for any $k \in \{1, 2, \ldots, d_1 \wedge d_2\}$,

    $$\max\{\|P_{\hat{U}_\perp} A\|_{(k)}, \|A P_{\hat{V}_\perp}\|_{(k)}\} \le 2\|E_r\|_{(k)}.$$

    Proof. Since $\operatorname{rank}(P_{\hat{U}_\perp} A) \le \operatorname{rank}(A) = r$, (2.3) of Lemma 2.2 applies, and we have

    $$\begin{aligned}
    \|P_{\hat{U}_\perp} A\|_{(k)} &= \sup_{\Psi_k(X)=1,\, \operatorname{rank}(X)\le r} \operatorname{tr}[X^T(P_{\hat{U}_\perp} A)] = \sup_{\Psi_k(X)=1,\, \operatorname{rank}(X)\le r} \operatorname{tr}[X^T(P_{\hat{U}_\perp}\hat{A} - P_{\hat{U}_\perp}E)] \\
    &\le \sup_{\Psi_k(X)=1,\, \operatorname{rank}(X)\le r} \operatorname{tr}[X^T(P_{\hat{U}_\perp}\hat{A})] + \sup_{\Psi_k(X)=1,\, \operatorname{rank}(X)\le r} \operatorname{tr}[X^T(P_{\hat{U}_\perp}E)].
    \end{aligned}$$

    According to Lemma 2.2 and (2.2),

    $$\|P_{\hat{U}_\perp} A\|_{(k)} \le \|(P_{\hat{U}_\perp}\hat{A})_r\|_{(k)} + \|(P_{\hat{U}_\perp}E)_r\|_{(k)}. \tag{2.4}$$

    In addition, $\|(P_{\hat{U}_\perp}\hat{A})_r\|_{(k)} = \|(\hat{A} - \hat{A}_r)_r\|_{(k)}$ due to $P_{\hat{U}}\hat{A} = \hat{A}_r$. On the other hand, based on Theorem 2 in [8] and the fact that the norm $\|(\cdot)_l\|_{(k)}$ is unitarily invariant, we have

    $$\|(A - A_r)_l\|_{(k)} = \inf_{M \in \mathbb{R}^{d_1 \times d_2},\, \operatorname{rank}(M)\le r} \|(A - M)_l\|_{(k)}.$$

    Therefore,

    $$\|(P_{\hat{U}_\perp}\hat{A})_r\|_{(k)} = \inf_{\operatorname{rank}(M)\le r}\|(\hat{A} - M)_r\|_{(k)} \le \|(\hat{A} - P_U\hat{A})_r\|_{(k)} = \|(P_{U_\perp}E)_r\|_{(k)},$$

    where the last equality uses $P_{U_\perp}A = 0$. For two matrices $B, C \in \mathbb{R}^{d_1 \times d_2}$, it is known that

    $$\sigma_{i+j-1}(BC^T) \le \sigma_i(B)\,\sigma_j(C). \tag{2.5}$$

    Thus, $\sigma_i(P_{U_\perp}E) \le \sigma_1(P_{U_\perp})\sigma_i(E) = \sigma_i(E)$ and, similarly, $\sigma_i(P_{\hat{U}_\perp}E) \le \sigma_i(E)$. Hence, by (2.4),

    $$\|P_{\hat{U}_\perp} A\|_{(k)} \le 2\|E_r\|_{(k)}.$$

    Similarly, $\|A P_{\hat{V}_\perp}\|_{(k)} \le 2\|E_r\|_{(k)}$. This completes the proof of Lemma 2.3.
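    Lemma 2.3 can also be verified numerically; a minimal sketch (our own check, with arbitrary test dimensions):

```python
import numpy as np

rng = np.random.default_rng(2)
d1, d2, r, k = 40, 30, 2, 5
U0, _ = np.linalg.qr(rng.standard_normal((d1, r)))
V0, _ = np.linalg.qr(rng.standard_normal((d2, r)))
A = U0 @ np.diag([5.0, 3.0]) @ V0.T                # rank-r signal
E = 0.2 * rng.standard_normal((d1, d2))            # perturbation
Uh, _, _ = np.linalg.svd(A + E)
P_perp = np.eye(d1) - Uh[:, :r] @ Uh[:, :r].T      # projection onto span(U_hat)-perp
lhs = np.linalg.svd(P_perp @ A, compute_uv=False)[:k].sum()
sE = np.linalg.svd(E, compute_uv=False)
assert lhs <= 2 * sE[:min(r, k)].sum()             # ||P_perp A||_(k) <= 2 ||E_r||_(k)
```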

    Now we are in a position to prove Theorem 1.1.

    Proof. By (1.2), $\hat{U}$ is composed of the first $r$ left singular vectors of $\hat{A}$; thus $\hat{A}_r = P_{\hat{U}}\hat{A}$. For any $k \in \{1, 2, \ldots, d_1 \wedge d_2\}$,

    $$\|\hat{A}_r - A\|_{(k)} = \|P_{\hat{U}}\hat{A} - (P_{\hat{U}} + P_{\hat{U}_\perp})A\|_{(k)} = \|P_{\hat{U}}E - P_{\hat{U}_\perp}A\|_{(k)} \le \|P_{\hat{U}}E\|_{(k)} + \|P_{\hat{U}_\perp}A\|_{(k)}.$$

    Since $\operatorname{rank}(P_{\hat{U}}E) \le r$ and, by (2.5), $\sigma_i(P_{\hat{U}}E) \le \sigma_i(E)$, we have $\|P_{\hat{U}}E\|_{(k)} \le \|E_r\|_{(k)}$. This together with Lemma 2.3 derives

    $$\|\hat{A}_r - A\|_{(k)} \le 3\|E_r\|_{(k)}.$$

    The proof of Theorem 1.1 is complete.

    Proof. First, for any $k \le r$, define $A_i, E_i \in \mathbb{R}^{d_1 \times d_2}$ $(i = 1, 2)$ by

    $$A_1 = \begin{pmatrix} \frac{t}{k} I_r & 0 & 0 \\ 0 & 0_{r \times r} & 0 \\ 0 & 0 & 0_{d_1 - 2r, d_2 - 2r} \end{pmatrix}, \quad E_1 = \begin{pmatrix} 0_{r \times r} & 0 & 0 \\ 0 & \frac{t}{k} I_r & 0 \\ 0 & 0 & 0_{d_1 - 2r, d_2 - 2r} \end{pmatrix};$$

    $$A_2 = \begin{pmatrix} 0_{r \times r} & 0 & 0 \\ 0 & \frac{t}{k} I_r & 0 \\ 0 & 0 & 0_{d_1 - 2r, d_2 - 2r} \end{pmatrix}, \quad E_2 = \begin{pmatrix} \frac{t}{k} I_r & 0 & 0 \\ 0 & 0_{r \times r} & 0 \\ 0 & 0 & 0_{d_1 - 2r, d_2 - 2r} \end{pmatrix}.$$

    Then $A_1 + E_1 = A_2 + E_2 = \hat{A}$, $\operatorname{rank}(A_1) = \operatorname{rank}(A_2) = r$, and $\|E_1\|_{(k)} = \|E_2\|_{(k)} = k \cdot \frac{t}{k} = t$. Therefore, $(A_1, E_1), (A_2, E_2) \in \mathcal{F}_r(t)$.

    For any estimator $\tilde{A}$ of $A$, one derives

    $$\begin{aligned}
    \inf_{\tilde{A}} \sup_{(A, E) \in \mathcal{F}_r(t)} \|\tilde{A} - A\|_{(k)} &\ge \inf_{\tilde{A}} \max\{\|\tilde{A} - A_1\|_{(k)}, \|\tilde{A} - A_2\|_{(k)}\} \\
    &\ge \inf_{\tilde{A}} \frac{1}{2}\left(\|\tilde{A} - A_1\|_{(k)} + \|\tilde{A} - A_2\|_{(k)}\right) \ge \frac{1}{2}\|A_1 - A_2\|_{(k)} = \frac{t}{2}. \tag{2.6}
    \end{aligned}$$

    Next, we show that Theorem 1.2 also holds for $k > r$. Take

    $$A_1 = \begin{pmatrix} \frac{t}{r} I_r & 0 & 0 \\ 0 & 0_{r \times r} & 0 \\ 0 & 0 & 0_{d_1 - 2r, d_2 - 2r} \end{pmatrix}, \quad E_1 = \begin{pmatrix} 0_{r \times r} & 0 & 0 \\ 0 & \frac{t}{r} I_r & 0 \\ 0 & 0 & 0_{d_1 - 2r, d_2 - 2r} \end{pmatrix};$$

    $$A_2 = \begin{pmatrix} 0_{r \times r} & 0 & 0 \\ 0 & \frac{t}{r} I_r & 0 \\ 0 & 0 & 0_{d_1 - 2r, d_2 - 2r} \end{pmatrix}, \quad E_2 = \begin{pmatrix} \frac{t}{r} I_r & 0 & 0 \\ 0 & 0_{r \times r} & 0 \\ 0 & 0 & 0_{d_1 - 2r, d_2 - 2r} \end{pmatrix}.$$

    Then $A_1 + E_1 = A_2 + E_2 = \hat{A}$, $\operatorname{rank}(A_1) = \operatorname{rank}(A_2) = r$, and $\|E_1\|_{(k)} = \|E_2\|_{(k)} = r \cdot \frac{t}{r} = t$ (each $E_i$ has only $r$ nonzero singular values). Therefore, $(A_1, E_1), (A_2, E_2) \in \mathcal{F}_r(t)$, and the same argument as in (2.6) gives

    $$\inf_{\tilde{A}} \sup_{(A, E) \in \mathcal{F}_r(t)} \|\tilde{A} - A\|_{(k)} \ge \frac{t}{2}.$$

    This completes the proof of Theorem 1.2.
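    The two-point construction is easy to reproduce; the sketch below (our illustration, for the case $k \le r$) confirms that the two pairs produce the same observation $\hat{A}$ and that $\frac{1}{2}\|A_1 - A_2\|_{(k)} = \frac{t}{2}$:

```python
import numpy as np

d1 = d2 = 10; r = 3; k = 2; t = 1.0
A1 = np.zeros((d1, d2)); A1[:r, :r] = (t / k) * np.eye(r)
E1 = np.zeros((d1, d2)); E1[r:2*r, r:2*r] = (t / k) * np.eye(r)
A2, E2 = E1.copy(), A1.copy()          # swap the roles of signal and noise
assert np.allclose(A1 + E1, A2 + E2)   # indistinguishable observations
s = np.linalg.svd(A1 - A2, compute_uv=False)
print(s[:k].sum() / 2)                 # ||A1 - A2||_(k) / 2 = t/2 = 0.5
```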

    As a byproduct of the perturbation theory above, this paper derives $\sin\Theta$ perturbation bounds for the left and right singular subspaces $U, V$ under the Ky Fan norm.

    Theorem 3.1. Let $\hat{A} = A + E \in \mathbb{R}^{d_1 \times d_2}$ with $\operatorname{rank}(A) = r$. If the singular value decompositions (1.1) and (1.2) hold, then

    $$\|\sin\Theta(\hat{U}, U)\|_{(k)} \le \frac{2\|E_r\|_{(k)}}{\sigma_r(A)}, \quad \|\sin\Theta(\hat{V}, V)\|_{(k)} \le \frac{2\|E_r\|_{(k)}}{\sigma_r(A)}$$

    for any $k \in \{1, 2, \ldots, d_1 \wedge d_2\}$.

    Proof. By Theorem 3.9 (II) in [9], one knows that $\|BC^T\|_{(k)} \ge \|B\|_{(k)}\,\sigma_{d_1 \wedge d_2}(C)$ for any two matrices $B, C \in \mathbb{R}^{d_1 \times d_2}$. This with (1.6) shows that

    $$\|\sin\Theta(\hat{U}, U)\|_{(k)} = \|\hat{U}_\perp^T U\|_{(k)} \le \frac{\|\hat{U}_\perp^T U U^T A\|_{(k)}}{\sigma_r(U^T A)}.$$

    According to (1.1) and $\operatorname{rank}(A) = r$, one has $UU^T A = A$ and $\sigma_r(U^T A) = \sigma_r(A)$. Thus

    $$\|\sin\Theta(\hat{U}, U)\|_{(k)} \le \frac{\|\hat{U}_\perp^T A\|_{(k)}}{\sigma_r(A)} \le \frac{2\|E_r\|_{(k)}}{\sigma_r(A)}$$

    thanks to Lemma 2.3 (note that $\|\hat{U}_\perp^T A\|_{(k)} = \|P_{\hat{U}_\perp} A\|_{(k)}$). Similarly, one can also get $\|\sin\Theta(\hat{V}, V)\|_{(k)} \le \frac{2\|E_r\|_{(k)}}{\sigma_r(A)}$. This concludes the proof of Theorem 3.1.
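    The subspace bound of Theorem 3.1 can also be checked numerically (our own sketch; the dimensions and signal strengths are arbitrary test values):

```python
import numpy as np

rng = np.random.default_rng(3)
d1, d2, r, k = 60, 50, 3, 3
U0, _ = np.linalg.qr(rng.standard_normal((d1, r)))
V0, _ = np.linalg.qr(rng.standard_normal((d2, r)))
A = U0 @ np.diag([10.0, 8.0, 6.0]) @ V0.T      # rank-r signal, sigma_r(A) = 6
E = 0.1 * rng.standard_normal((d1, d2))
Uh, _, _ = np.linalg.svd(A + E)
cosines = np.linalg.svd(Uh[:, :r].T @ U0, compute_uv=False)
sines = np.sqrt(np.clip(1.0 - cosines**2, 0.0, None))
lhs = np.sort(sines)[::-1][:k].sum()           # ||sin Theta(U_hat, U)||_(k)
sE = np.linalg.svd(E, compute_uv=False)
assert lhs <= 2 * sE[:min(r, k)].sum() / 6.0   # 2 ||E_r||_(k) / sigma_r(A)
```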

    Theorem 3.2. For $k \in \{1, 2, \ldots, d_1 \wedge d_2\}$, define the class

    $$\mathcal{F}_r(\alpha, \beta) = \{(A, E) : \operatorname{rank}(A) = r, \sigma_r(A) \ge \alpha, \|E\|_{(k)} \le \beta\}.$$

    If $r \le \frac{1}{2}(d_1 \wedge d_2)$ and $\alpha(k \wedge r) \ge \beta$, then for any estimators $\tilde{U}$ and $\tilde{V}$ based on the observation matrix $A + E$, we have

    $$\inf_{\tilde{U}} \sup_{(A, E) \in \mathcal{F}_r(\alpha, \beta)} \|\sin\Theta(\tilde{U}, U)\|_{(k)} \ge \frac{1}{2\sqrt{10}}\frac{\beta}{\alpha}, \tag{3.1}$$
    $$\inf_{\tilde{V}} \sup_{(A, E) \in \mathcal{F}_r(\alpha, \beta)} \|\sin\Theta(\tilde{V}, V)\|_{(k)} \ge \frac{1}{2\sqrt{10}}\frac{\beta}{\alpha}. \tag{3.2}$$

    Proof. We only need to show (3.2), since (3.1) follows by a similar process. First, we introduce the following singular value decomposition:

    $$\begin{pmatrix} \alpha & \frac{\beta}{k \wedge r} \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} u_{11} & u_{12} \\ u_{21} & u_{22} \end{pmatrix}\begin{pmatrix} \sigma_1 & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} v_{11} & v_{12} \\ v_{21} & v_{22} \end{pmatrix}^T = \begin{pmatrix} u_{11} \\ u_{21} \end{pmatrix}\sigma_1\begin{pmatrix} v_{11} \\ v_{21} \end{pmatrix}^T.$$

    Then by Lemma 3 in [2] and $\alpha(k \wedge r) \ge \beta$, we know that

    $$|v_{21}| \ge \frac{1}{\sqrt{10}}\frac{\beta}{(k \wedge r)\,\alpha}. \tag{3.3}$$

    Second, based on the above decomposition, we construct the following matrices:

    $$A_1 = \begin{pmatrix} \sigma_1 u_{11}v_{11} I_r & \sigma_1 u_{11}v_{21} I_r & 0 \\ \sigma_1 u_{21}v_{11} I_r & \sigma_1 u_{21}v_{21} I_r & 0 \\ 0 & 0 & 0_{d_1 - 2r, d_2 - 2r} \end{pmatrix}, \quad E_1 = 0_{d_1, d_2};$$

    $$A_2 = \begin{pmatrix} \alpha I_r & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0_{d_1 - 2r, d_2 - 2r} \end{pmatrix}, \quad E_2 = \begin{pmatrix} 0 & \frac{\beta}{k \wedge r} I_r & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0_{d_1 - 2r, d_2 - 2r} \end{pmatrix}.$$

    Obviously, $\operatorname{rank}(A_1) = \operatorname{rank}(A_2) = r$ and

    $$\hat{A} = A_1 + E_1 = A_2 + E_2 = \begin{pmatrix} \alpha I_r & \frac{\beta}{k \wedge r} I_r & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0_{d_1 - 2r, d_2 - 2r} \end{pmatrix}.$$

    On the other hand, it is easy to check that $\sigma_r(A_1) = \sigma_1(A_1) \ge \alpha$, $\|E_1\|_{(k)} = 0 \le \beta$, and $\sigma_r(A_2) = \alpha$, $\|E_2\|_{(k)} = (k \wedge r)\cdot\frac{\beta}{k \wedge r} = \beta \le \beta$. Hence, $(A_1, E_1), (A_2, E_2) \in \mathcal{F}_r(\alpha, \beta)$. Let $V_1$ and $V_2$ be the matrices of the leading $r$ right singular vectors of $A_1$ and $A_2$, respectively; then

    $$V_1 = \begin{pmatrix} v_{11} I_r \\ v_{21} I_r \\ 0_{(d_2 - 2r) \times r} \end{pmatrix}, \quad V_2 = \begin{pmatrix} I_r \\ 0_{r \times r} \\ 0_{(d_2 - 2r) \times r} \end{pmatrix}$$

    follow from the structure of $A_1$ and $A_2$. Therefore, for any estimator $\tilde{V}$ of the leading $r$-dimensional right singular subspace, we have

    $$\begin{aligned}
    \inf_{\tilde{V}} \sup_{(A, E) \in \mathcal{F}_r(\alpha, \beta)} \|\sin\Theta(\tilde{V}, V)\|_{(k)} &\ge \inf_{\tilde{V}} \max\{\|\sin\Theta(\tilde{V}, V_1)\|_{(k)}, \|\sin\Theta(\tilde{V}, V_2)\|_{(k)}\} \\
    &\ge \inf_{\tilde{V}} \frac{1}{2}\left(\|\sin\Theta(\tilde{V}, V_1)\|_{(k)} + \|\sin\Theta(\tilde{V}, V_2)\|_{(k)}\right) \\
    &\overset{(1.5)}{\ge} \frac{1}{2}\|\sin\Theta(V_1, V_2)\|_{(k)} \overset{(1.6)}{=} \frac{1}{2}\|v_{21} I_r\|_{(k)} = \frac{1}{2}(k \wedge r)|v_{21}| \overset{(3.3)}{\ge} \frac{1}{2\sqrt{10}}\frac{\beta}{\alpha}.
    \end{aligned}$$

    This completes the proof of Theorem 3.2.
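    A small check of this construction (our own sketch): all $r$ principal sines between $V_1$ and $V_2$ equal $|v_{21}|$, so $\|\sin\Theta(V_1, V_2)\|_{(k)} = (k \wedge r)|v_{21}|$.

```python
import numpy as np

d2, r, k = 12, 3, 3
alpha, beta = 2.0, 1.0
M = np.array([[alpha, beta / min(k, r)], [0.0, 0.0]])
_, _, Vt = np.linalg.svd(M)
v11, v21 = Vt[0]                        # top right singular vector of M
V1 = np.zeros((d2, r)); V1[:r] = v11 * np.eye(r); V1[r:2*r] = v21 * np.eye(r)
V2 = np.zeros((d2, r)); V2[:r] = np.eye(r)
cosines = np.linalg.svd(V1.T @ V2, compute_uv=False)
sines = np.sqrt(np.clip(1.0 - cosines**2, 0.0, None))
assert np.isclose(np.sort(sines)[::-1][:k].sum(), min(k, r) * abs(v21))
```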

    Remark 3.1. In Theorem 3.2, the assumption $\alpha(k \wedge r) \ge \beta$ is necessary to obtain a consistent estimator. In fact, if $\alpha(k \wedge r) < \beta$, there is no stable algorithm to recover either $U$ or $V$, in the sense that there exists a uniform constant $\frac{1}{2\sqrt{2}}$ such that

    $$\inf_{\tilde{U}} \sup_{(A, E) \in \mathcal{F}_r(\alpha, \beta)} \|\sin\Theta(\tilde{U}, U)\|_{(k)} \ge \frac{1}{2\sqrt{2}}, \quad \inf_{\tilde{V}} \sup_{(A, E) \in \mathcal{F}_r(\alpha, \beta)} \|\sin\Theta(\tilde{V}, V)\|_{(k)} \ge \frac{1}{2\sqrt{2}}.$$

    Proof. Let

    $$\begin{pmatrix} \alpha & \frac{\beta}{k \wedge r} \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} u_{11} \\ u_{21} \end{pmatrix}\sigma_1\begin{pmatrix} v_{11} \\ v_{21} \end{pmatrix}^T;$$

    then by Lemma 3 in [2] and $\alpha(k \wedge r) < \beta$, we know $|v_{21}| \ge \frac{1}{\sqrt{2}}$. Therefore, repeating the argument in the proof of Theorem 3.2, Remark 3.1 is established.

    Remark 3.2. Theorem 3.2 shows that the rates given in Theorem 3.1 are optimal; the corresponding lower bounds for the singular subspaces were not given in Luo et al. [6].

    In this section, we provide some numerical studies to support our theoretical results. Throughout the simulations, we use the nuclear norm (the sum of all singular values) as the error metric, i.e., $k = d_1 \wedge d_2$. Without loss of generality, we assume $d_1 = d_2 := d$. In each setting, we randomly generate the perturbation $E = uv^T + Z \in \mathbb{R}^{d \times d}$, where $u, v \in \mathbb{R}^d$ are randomly generated unit vectors and $Z$ has independent and identically distributed $N(0, \sigma)$ entries. On the other hand, we generate the low-rank matrix $A = U\Sigma_r V^T$ with a special structure: $U, V \in \mathbb{R}^{d \times r}$ are independently drawn from $\mathbb{O}_{d, r}$ uniformly at random, and $\Sigma_r$ is an $r \times r$ diagonal matrix with singular values decaying polynomially as $(\Sigma_r)_{ii} = 10/i$, $1 \le i \le r$. Each simulation setting is repeated 100 times and the average values are reported. Figure 1 shows the results of the numerical studies.

    Figure 1.  The Theorem 1.1 bound, the upper bound (1.3), and the true value of $\|\hat{A}_r - A\|_{(k)}$.

    We set $d \in \{100, 200\}$, $r \in \{3, 6, 9, 12, 15\}$, and $\sigma = 0.004$. The upper bounds from Theorem 1.1 and (1.3) and the true value of $\|\hat{A}_r - A\|_{(k)}$ are shown in Figure 1. The upper bound in Theorem 1.1 is tighter than the upper bound (1.3) in all settings. Furthermore, the upper bound of Theorem 1.1 remains steady, while the upper bound of (1.3) increases significantly when $d$ grows from 100 to 200.
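    A sketch reproducing one run of this experiment (our reconstruction of the setup; the polynomial decay $(\Sigma_r)_{ii} = 10/i$ is our reading of the source, and a single replication is shown instead of the 100-run average):

```python
import numpy as np

rng = np.random.default_rng(4)
d, r, sigma = 100, 6, 0.004
U, _ = np.linalg.qr(rng.standard_normal((d, r)))
V, _ = np.linalg.qr(rng.standard_normal((d, r)))
A = U @ np.diag(10.0 / np.arange(1, r + 1)) @ V.T         # (Sigma_r)_ii = 10/i
u = rng.standard_normal(d); u /= np.linalg.norm(u)
v = rng.standard_normal(d); v /= np.linalg.norm(v)
E = np.outer(u, v) + sigma * rng.standard_normal((d, d))  # rank-1 spike + noise
Uh, sh, Vth = np.linalg.svd(A + E)
A_r = Uh[:, :r] @ np.diag(sh[:r]) @ Vth[:r, :]            # truncated SVD estimator
nuc = lambda M: np.linalg.svd(M, compute_uv=False).sum()  # k = d: nuclear norm
sE = np.linalg.svd(E, compute_uv=False)
print(nuc(A_r - A), 3 * sE[:r].sum(), 2 * sE.sum())       # truth, Thm 1.1, (1.3)
```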

    In this paper, we give a sharp upper bound for the estimation of a rank-$r$ matrix $A$ under the Ky Fan norm, and show that it is optimal by establishing the corresponding lower bound. As a byproduct, we provide perturbation bounds for the singular subspaces under the Ky Fan norm $\sin\Theta$ distance, together with the corresponding lower bounds showing their optimality. Finally, we provide numerical studies to support our theoretical results.

    As a unitarily invariant norm, the Ky Fan norm, which differs from the Schatten-$q$ norm, is an important matrix norm in its own right, so it makes sense to study perturbation bounds for low-rank matrices under it. It is worth mentioning that the approach used to prove Lemma 2.3 can be generalized to any unitarily invariant norm; therefore, it can be used to study other perturbation problems in the future.

    The authors would like to express their gratitude to the editor and anonymous referees for their constructive and valuable suggestions which improved the paper. This work is supported by the National Natural Science Foundation of China (No. 11771030 and 12171016).

    The authors declare that they have no conflicts of interest.



    [1] T. T. Cai, A. R. Zhang, Rate-optimal perturbation bounds for singular subspaces with applications to high-dimensional statistics, Ann. Statist., 46 (2018), 60–89. https://doi.org/10.1214/17-AOS1541 doi: 10.1214/17-AOS1541
    [2] T. T. Cai, A. R. Zhang, Supplement to "Rate-optimal perturbation bounds for singular subspaces with applications to high-dimensional statistics", 2018. https://doi.org/10.1214/17-AOS1541SUPP
    [3] C. Davis, W. M. Kahan, The rotation of eigenvectors by a perturbation, SIAM J. Numer. Anal., 7 (1970), 1–46. https://doi.org/10.1137/0707001 doi: 10.1137/0707001
    [4] J. Fan, W. Wang, Y. Zhong, An $\ell_\infty$ eigenvector perturbation bound and its application to robust covariance estimation, J. Mach. Learn. Res., 18 (2018), 1–42.
    [5] J. W. Huang, J. J. Wang, F. Zhang, H. L. Wang, W. D. Wang, Perturbation analysis of low-rank matrix stable recovery, Int. J. Wavelets Multi., 19 (2021), 2050091. https://doi.org/10.1142/S0219691320500915 doi: 10.1142/S0219691320500915
    [6] Y. T. Luo, R. G. Han, A. R. Zhang, A Schatten-q low-rank matrix perturbation analysis via perturbation projection error bound, Linear Algebra Appl., 630 (2021), 225–240. https://doi.org/10.1016/j.laa.2021.08.005 doi: 10.1016/j.laa.2021.08.005
    [7] Y. M. Liu, C. G. Ren, An optimal perturbation bound, Math. Method. Appl. Sci., 42 (2019), 3791–3798. https://doi.org/10.1002/mma.5612 doi: 10.1002/mma.5612
    [8] L. Mirsky, Symmetric gauge functions and unitarily invariant norms, Q. J. Math., 11 (1960), 50–59. https://doi.org/10.1093/qmath/11.1.50 doi: 10.1093/qmath/11.1.50
    [9] G. W. Stewart, J. G. Sun, Matrix perturbation theory, New York: Academic Press, 1990.
    [10] V. Vu, Singular vectors under random perturbation, Random Struct. Algor., 39 (2011), 526–538. https://doi.org/10.1002/rsa.20367 doi: 10.1002/rsa.20367
    [11] R. R. Wang, Singular vector perturbation under Gaussian noise, SIAM J. Matrix Anal. Appl., 36 (2015), 158–177. https://doi.org/10.1137/130938177 doi: 10.1137/130938177
    [12] P. A. Wedin, Perturbation bounds in connection with singular value decomposition, BIT, 12 (1972), 99–111. https://doi.org/10.1007/BF01932678 doi: 10.1007/BF01932678
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
