On identifiability of 3-tensors of multilinear rank (1; Lr; Lr)

  • Published: 01 October 2016
  • In this paper, we study a specific big data model via multilinear rank tensor decompositions. The model approximates a given tensor by a sum of multilinear rank $(1, L_r, L_r)$ terms, and we characterize the identifiability of this model from a geometric point of view. Our main results consist of exact identifiability and generic identifiability. The argument for generic identifiability relies on the exact identifiability, which in turn is closely related to the well-known "trisecant lemma" from algebraic geometry (see Proposition 2.6 in [1]). The connection discussed in this paper gives a clear geometric picture of this model.

    Citation: Ming Yang, Dunren Che, Wen Liu, Zhao Kang, Chong Peng, Mingqing Xiao, Qiang Cheng. On identifiability of 3-tensors of multilinear rank (1; Lr; Lr)[J]. Big Data and Information Analytics, 2016, 1(4): 391-401. doi: 10.3934/bdia.2016017



    1. Introduction


    1.1. Content of the paper

    The importance and usefulness of tensors, which represent big data sets as multiway arrays, have been increasingly recognized over the last decades, as testified by a number of surveys [15,20,14,5,17], among others. The identifiability property (see [3,10,2,9]), including both exact and generic identifiability, is critical for tensor models and is widely used in many areas, such as signal processing, statistics, and computer science. For instance, in signal processing the tensor encodes data from received signals, and one needs to decompose the tensor to obtain the transmitted signals. If uniqueness does not hold, one may not be able to recover the transmitted signals. Therefore, establishing the uniqueness property of an appropriate tensor decomposition is not only of mathematical interest but also necessary in various real applications. Extensive studies under the framework of algebraic geometry have provided various conditions involving tensor rank and dimensions that ensure generic identifiability.

    In this paper, we consider the model of low multilinear rank tensor decomposition (LRD). The initial idea was proposed by De Lathauwer [6,7,8], where the rank-1 tensors in the CP decomposition are replaced by tensors of multilinear rank $(1, L_r, L_r)$. Such an approach allows us to model more complex phenomena and to analyze big data sets with complex structures, especially in cases where the tensor components cannot be represented as rank-1 tensors. We extend the theoretical framework by establishing uniqueness conditions for LRD, which are critical for the application of tensor-based approaches to big data sets. More specifically, if a tensor can be written in a unique manner as a sum of tensors of low multilinear rank, then this decomposition may reveal meaningful characteristics that are more general than the components extracted from the CP decomposition. The uniqueness property of LRD can be theoretically guaranteed under mild conditions within our framework, and we provide new uniqueness criteria for multilinear-rank tensor decomposition that relate closely to applications of LRD in blind source separation in signal processing. The theoretical contribution of establishing explicit uniqueness criteria for LRD may play a significant role in application domains of tensor-based methods for big data analysis [4].


    1.2. Definitions

    Definition 1.1. (see Chapter III in [19]) Let $\mathbb{K}$ be the field $\mathbb{C}$ or $\mathbb{R}$ and let $A_1,\ldots,A_n$ be $\mathbb{K}$-vector spaces. The tensor product space $A_1\otimes\cdots\otimes A_n$ is the quotient module $\mathcal{K}(A_1,\ldots,A_n)/\mathcal{R}$, where $\mathcal{K}(A_1,\ldots,A_n)$ is the free module generated by all $n$-tuples $(a_1,\ldots,a_n)\in A_1\times\cdots\times A_n$ and $\mathcal{R}$ is the submodule of $\mathcal{K}(A_1,\ldots,A_n)$ generated by elements of the form

    $$(a_1,\ldots,\alpha a_k+\beta a_k',\ldots,a_n)-\alpha(a_1,\ldots,a_k,\ldots,a_n)-\beta(a_1,\ldots,a_k',\ldots,a_n)$$

    for all $a_k, a_k'\in A_k$, $\alpha,\beta\in\mathbb{K}$, and $k\in\{1,\ldots,n\}$. We write $a_1\otimes\cdots\otimes a_n$ for the element $(a_1,\ldots,a_n)+\mathcal{R}$ in the quotient space $\mathcal{K}(A_1,\ldots,A_n)/\mathcal{R}$.

    An element of $A_1\otimes\cdots\otimes A_n$ that can be expressed in the form $a_1\otimes\cdots\otimes a_n$ is called decomposable. The symbol $\otimes$ is called the tensor product when applied to vectors from abstract vector spaces.

    The elements of $A_1\otimes\cdots\otimes A_n$ are called order-$n$ tensors, and $I_k=\dim A_k$, $k=1,\ldots,n$, are the dimensions of the tensors.

    If $U\cong\mathbb{K}^{l}$, $V\cong\mathbb{K}^{m}$, $W\cong\mathbb{K}^{n}$, we may identify

    $$\mathbb{K}^{l}\otimes\mathbb{K}^{m}\otimes\mathbb{K}^{n}=\mathbb{K}^{l\times m\times n}$$

    through the interpretation of the tensor product of vectors as a tensor via the Segre outer product,

    $$[u_1,\ldots,u_l]^{T}\otimes[v_1,\ldots,v_m]^{T}\otimes[w_1,\ldots,w_n]^{T}=[u_iv_jw_k]_{i,j,k=1}^{l,m,n}.$$
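    For instance, with arbitrarily chosen vectors (an illustrative example, not from the original text) $u=[1,2]^{T}$, $v=[1,0,3]^{T}$ and $w=[2,1]^{T}$, the resulting tensor in $\mathbb{K}^{2\times3\times2}$ has entries $t_{ijk}=u_iv_jw_k$; e.g.,

    $$t_{231}=u_2\,v_3\,w_1=2\cdot3\cdot2=12,\qquad t_{112}=u_1\,v_1\,w_2=1\cdot1\cdot1=1.$$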

    Definition 1.2. The Khatri-Rao product is the "matching columnwise" Segre outer product. Given matrices $A=[a_1,\ldots,a_K]\in\mathbb{K}^{I\times K}$ and $B=[b_1,\ldots,b_K]\in\mathbb{K}^{J\times K}$, their Khatri-Rao product is denoted by $A\odot B$. The result is a matrix of size $(IJ)\times K$ defined by

    $$A\odot B=[a_1\otimes b_1\;\; a_2\otimes b_2\;\;\cdots\;\; a_K\otimes b_K].$$

    If $a$ and $b$ are vectors, then the Khatri-Rao and Segre outer products are identical, i.e., $a\odot b=a\otimes b$.
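    As a small worked instance (entries chosen arbitrarily for illustration), take $I=J=K=2$ with

    $$A=\begin{pmatrix}1&0\\2&1\end{pmatrix},\quad B=\begin{pmatrix}1&1\\0&2\end{pmatrix}\quad\Longrightarrow\quad A\odot B=\begin{pmatrix}1&0\\0&0\\2&1\\0&2\end{pmatrix},$$

    where the first column is the vectorized outer product $a_1\otimes b_1$ with $a_1=[1,2]^{T}$, $b_1=[1,0]^{T}$, and the second column is $a_2\otimes b_2$ with $a_2=[0,1]^{T}$, $b_2=[1,2]^{T}$.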

    Given standard orthonormal bases $e_1^{(k)},\ldots,e_{I_k}^{(k)}$ for $A_k\cong\mathbb{K}^{I_k}$, $k=1,\ldots,N$, any tensor $\mathcal{X}$ in $A_1\otimes\cdots\otimes A_N\cong\mathbb{K}^{I_1\times\cdots\times I_N}$ can be expressed as a linear combination

    $$\mathcal{X}=\sum_{i_1,\ldots,i_N=1}^{I_1,\ldots,I_N}t_{i_1\cdots i_N}\,e_{i_1}^{(1)}\otimes\cdots\otimes e_{i_N}^{(N)}.$$

    In older literature, the $t_{i_1\cdots i_N}$'s are often called the components of $\mathcal{X}$. $\mathcal{X}$ has rank one (rank-1) if there exist non-zero $a^{(i)}\in A_i$, $i=1,\ldots,N$, such that $\mathcal{X}=a^{(1)}\otimes\cdots\otimes a^{(N)}$, where $a^{(1)}\otimes\cdots\otimes a^{(N)}$ is the Segre outer product.

    The rank of $\mathcal{X}$ is defined to be the smallest $r$ such that it may be written as a sum of $r$ rank-1 tensors, i.e.,

    $$\operatorname{rank}(\mathcal{X})=\min\Big\{r:\mathcal{X}=\sum_{p=1}^{r}a_p^{(1)}\otimes\cdots\otimes a_p^{(N)}\Big\}.$$

    Definition 1.3. The $n$-th flattening map on any tensor $\mathcal{X}=[t_{i_1\cdots i_N}]_{i_1,\ldots,i_N=1}^{I_1,\ldots,I_N}\in\mathbb{K}^{I_1\times\cdots\times I_N}$ is the function (see Section 2 of [11])

    $$\flat_n:\mathbb{K}^{I_1\times\cdots\times I_N}\to\mathbb{K}^{I_n\times(I_1\cdots\hat{I}_n\cdots I_N)}$$

    defined by

    $$(\flat_n(\mathcal{X}))_{ij}=(\mathcal{X})_{s_n(i,j)},$$

    where $s_n(i,j)$ is the $j$-th element in lexicographic order in the subset of $I_1\times\cdots\times I_N$ consisting of elements that have $n$-th coordinate equal to $i$, and by convention a caret over any entry of an $N$-tuple means that the respective entry is omitted.

    For a tensor $\mathcal{X}=[t_{ijk}]\in\mathbb{K}^{l\times m\times n}$, let

    $$r_1=\dim\operatorname{span}_{\mathbb{K}}\{X_{1\cdot\cdot},\ldots,X_{l\cdot\cdot}\},\quad r_2=\dim\operatorname{span}_{\mathbb{K}}\{X_{\cdot1\cdot},\ldots,X_{\cdot m\cdot}\},\quad r_3=\dim\operatorname{span}_{\mathbb{K}}\{X_{\cdot\cdot1},\ldots,X_{\cdot\cdot n}\}.$$

    Here

    $$X_{i\cdot\cdot}=[t_{ijk}]_{j,k=1}^{m,n}\in\mathbb{K}^{m\times n},\quad X_{\cdot j\cdot}=[t_{ijk}]_{i,k=1}^{l,n}\in\mathbb{K}^{l\times n},\quad X_{\cdot\cdot k}=[t_{ijk}]_{i,j=1}^{l,m}\in\mathbb{K}^{l\times m}.$$

    The multilinear rank of $\mathcal{X}\in\mathbb{K}^{l\times m\times n}$ is $(r_1,r_2,r_3)$, with $r_1$, $r_2$, $r_3$ defined above.
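    In practice the three numbers $(r_1,r_2,r_3)$ are just the ranks of the three flattenings. The following is a minimal numerical sketch (not part of the original paper; the helper name multilinear_rank and the random example are our own) of how one might compute them with NumPy:

```python
import numpy as np

def multilinear_rank(X, tol=1e-10):
    """Return (r1, r2, r3) for a 3-way array X via its mode-n flattenings."""
    ranks = []
    for n in range(3):
        # Move mode n to the front and flatten the remaining modes into columns.
        Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)
        ranks.append(int(np.linalg.matrix_rank(Xn, tol=tol)))
    return tuple(ranks)

# Example: a sum of two rank-(1, 2, 2) terms in K^{2 x 4 x 4}.
rng = np.random.default_rng(0)
a = rng.standard_normal((2, 2))                                   # a_1, a_2 in K^2
Xs = [rng.standard_normal((4, 2)) @ rng.standard_normal((2, 4))   # rank-2 slabs X_r
      for _ in range(2)]
T = sum(np.einsum('i,jk->ijk', a[r], Xs[r]) for r in range(2))
print(multilinear_rank(T))   # typically (2, 4, 4) for generic random draws
```

    The first flattening has rank at most 2 here because only two vectors $a_r$ are involved, while the other two flattenings are generically of full rank.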

    Definition 1.4. (see Definition 11 in [4]) A decomposition of a tensor $\mathcal{X}\in\mathbb{K}^{I\times J\times K}$ in a sum of rank-$(1,L_r,L_r)$ terms, $1\le r\le R$, is a decomposition of $\mathcal{X}$ of the form

    $$\mathcal{X}=\sum_{r=1}^{R}a_r\otimes X_r,$$

    in which the $(J\times K)$ matrix $X_r$ has rank $L_r$, $1\le r\le R$, and no two of the $X_r$'s are collinear.

    It is clear that in $\mathcal{X}=\sum_{r=1}^{R}a_r\otimes X_r$ one can arbitrarily permute the different rank-$(1,L_r,L_r)$ terms $a_r\otimes X_r$. Also, one can scale $X_r$, provided that $a_r$ is counter-scaled. We call this decomposition essentially unique when it is unique up to these trivial indeterminacies.
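    Written out, the scaling indeterminacy is simply

    $$a_r\otimes X_r=(\lambda a_r)\otimes(\lambda^{-1}X_r),\qquad\lambda\in\mathbb{K}\setminus\{0\},$$

    so at best the pair $(a_r,X_r)$ can be recovered up to this rescaling and up to a reordering of the $R$ terms.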

    Definition 1.5. Let $\mu_{\mathbb{K}}$ be the Lebesgue measure on $\mathbb{K}^{I\times R}\times\mathbb{K}^{J\times K\times R}$. Then $\mathcal{X}=\sum_{r=1}^{R}a_r\otimes X_r$ in Definition 1.4 is generically unique if

    $$\mu_{\mathbb{K}}\Big\{(a_1,\ldots,a_R,X_1,\ldots,X_R)\in\mathbb{K}^{I\times R}\times\mathbb{K}^{J\times K\times R}:\ \mathcal{X}=\sum_{r=1}^{R}a_r\otimes X_r\ \text{is not essentially unique}\Big\}=0,$$

    where $a_r\in\mathbb{K}^{I}$ and $X_r\in\mathbb{K}^{J\times K}$.

    Note that in Definition 1.4 we could require $\{a_i,\ 1\le i\le R\}$ to be an orthogonal frame:

    Definition 1.6. A decomposition of a tensor $\mathcal{X}\in\mathbb{K}^{I\times J\times K}$ in an orthogonal frame in a sum of rank-$(1,L_r,L_r)$ terms, $1\le r\le R$, is a decomposition of $\mathcal{X}$ of the form

    $$\mathcal{X}=\sum_{r=1}^{R}a_r\otimes X_r.$$

    As in Definition 1.4, $X_r$ has rank $L_r$, $1\le r\le R$, but in addition we require $\{a_i,\ 1\le i\le R\}$ to be an orthogonal frame.


    1.3. Main results

    The main results of the paper are the following; their proofs are given in the subsequent sections.

    Theorem 1.7. Assume $I\ge R$. Then $\mathcal{X}=\sum_{r=1}^{R}a_r\otimes X_r$ in Definition 1.4 is essentially unique if and only if, for every subset $\{X_{j_1},\ldots,X_{j_s}\}$ of $\{X_1,\ldots,X_R\}$,

    $$\operatorname{span}_{\mathbb{K}}\{X_{j_1},\ldots,X_{j_s}\}\cap\Sigma_{L_{j_t}}(\mathbb{K}^{J\times K})\subseteq\{X_{j_1},\ldots,X_{j_s}\},\qquad1\le t\le s,$$

    where $\Sigma_{L}(\mathbb{K}^{J\times K})=\{M\in\mathbb{K}^{J\times K}\mid\operatorname{rank}M\le L\}$.
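    Informally, for $R=2$ (and ignoring the trivial scaling indeterminacy) the condition says that, apart from $X_1$ and $X_2$ themselves, no combination of the two slabs is allowed to be of low rank:

    $$\operatorname{rank}(\chi_1X_1+\chi_2X_2)>\max\{L_1,L_2\}\qquad\text{whenever }\chi_1\chi_2\ne0\text{ and }\chi_1X_1+\chi_2X_2\notin\{X_1,X_2\};$$

    this is exactly the quantity analyzed in the proof of Theorem 1.8 below.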

    Remark 1. In reasonably small cases, one can use tools from numerical algebraic geometry such as those described in [18,12,13].

    Remark 2. A generic $b\times b$ pencil is diagonalizable (as the conditions of having repeated eigenvalues or bounded rank are closed conditions) and thus of rank $b$. Thus for most (more precisely, a Zariski open subset of) pencils that are not diagonalizable, a perturbation by a general rank-one matrix will make them diagonalizable. Moreover, there is a normal form for a general point $p$ of $\Sigma_{L}(\mathbb{K}^{J\times K})$ ($L$ smaller than $J$ and $K$), namely

    $$p=b_1\otimes c_1+\cdots+b_L\otimes c_L,$$

    where $\{b_1,\ldots,b_L\}$ and $\{c_1,\ldots,c_L\}$ are linearly independent sets.

    We now establish a simpler condition related to the uniqueness of $\mathcal{X}=\sum_{r=1}^{R}a_r\otimes X_r$ in Definition 1.4. More precisely, there is a set of tensors $\mathcal{X}$ of measure 0 such that, for any $\mathcal{X}$ outside this set, the conditions are sufficient to guarantee the uniqueness of $\mathcal{X}=\sum_{r=1}^{R}a_r\otimes X_r$. Note that these conditions are not truly sufficient, since they fail to provide the conclusion on a set of problems of measure 0. They do, however, illustrate very well the situations in which uniqueness should hold.

    Theorem 1.8. $\mathcal{X}=a_1\otimes X_1+a_2\otimes X_2$ in Definition 1.4 is generically unique if and only if

    $$I\ge2\quad\text{and}\quad J=K>\max\Big\{\tfrac{2L_1+L_2}{2},\ \tfrac{2L_2+L_1}{2},\ L_1,\ L_2\Big\}.$$

    Theorem 1.9. $\mathcal{X}=\sum_{r=1}^{R}a_r\otimes X_r$ in Definition 1.4 is generically unique if

    $$I\ge R,\quad K\ge\sum_{r=1}^{R}L_r,\quad J\ge2\max_i\{L_i\},\quad J-\max_i\{L_i\}\ge R,\quad L_i+L_j>L_k$$

    for all $1\le i,j,k\le R$.

    For the low multilinear rank decomposition in an orthogonal frame, we have the following theorem.

    Theorem 1.10. A tensor decomposition of $\mathcal{X}\in\mathbb{K}^{I\times J\times K}$,

    $$\mathcal{X}=\sum_{r=1}^{R}a_r\otimes X_r,$$

    as in Definition 1.6 is essentially unique if and only if, for any non-identity special orthogonal matrix $E=[\varepsilon_{ij}]_{1\le i,j\le R}$, there exists $k$, $1\le k\le R$, such that

    $$\operatorname{rank}(\varepsilon_{k1}X_1+\cdots+\varepsilon_{kR}X_R)\notin\{L_1,\ldots,L_R\}.$$

    1.4. Outline of the paper

    In this paper, we first provide some known preliminary results related to the tensor decompositions into multilinear rank $(1,L_r,L_r)$ terms that we are considering. We then establish simple geometric necessary and sufficient conditions which guarantee the uniqueness of such decompositions (see Theorem 1.7). These conditions are then relaxed to obtain simpler sufficient conditions (Theorem 1.8 and Theorem 1.9). Finally, we discuss the uniqueness of tensor decompositions into multilinear rank $(1,L_r,L_r)$ terms in an orthogonal frame, which provides additional structure.


    2. Algebraic criteria of uniqueness

    Definition 2.1. For a vector space $V$, $V^{*}$ denotes the dual space of linear functionals on $V$, i.e., the vector space whose elements are linear maps from $V$ to $\mathbb{K}$: $\{\alpha:V\to\mathbb{K}\mid\alpha\text{ is linear}\}$. If one works in bases and represents elements of $V$ by column vectors, then elements of $V^{*}$ are naturally represented by row vectors, and the pairing $\langle\alpha,v\rangle$ is just row-column matrix multiplication. A basis $v_1,\ldots,v_m$ of $V$ determines a basis $\alpha^{1},\ldots,\alpha^{m}$ of $V^{*}$ by $\langle\alpha^{j},v_i\rangle=\delta_{i}^{j}$, called the dual basis. For a subset $S\subseteq V$ we define $S^{\perp}=\{\alpha\in V^{*}\mid\langle\alpha,v\rangle=0\ \text{for all }v\in S\}$.


    2.1. Proof of Theorem 1.7

    Proof. Assume on the contrary that $\mathcal{X}=\sum_{r=1}^{R}a_r\otimes X_r$ admits a different decomposition $\mathcal{X}=\sum_{r=1}^{R}a_r'\otimes X_r'$ of the same form. Since $a_1,\ldots,a_R$ are linearly independent, we claim that

    $$a_r'\in\operatorname{span}_{\mathbb{K}}\{a_1,\ldots,a_R\}.$$

    Indeed, if

    $$a_r'\notin\operatorname{span}_{\mathbb{K}}\{a_1,\ldots,a_R\},$$

    then contracting $\mathcal{X}$ in the first mode with a suitable functional $\alpha$ vanishing on $\operatorname{span}_{\mathbb{K}}\{a_1,\ldots,a_R\}$ (see Definition 2.1) gives

    $$\langle\mathcal{X},\alpha\rangle=0=X_r',$$

    which is a contradiction, since $X_r'$ has rank $L_r>0$. Therefore, we have $a_r'=\sum_{j=1}^{R}\alpha_{rj}a_j$, where the $\alpha_{rj}$ are not all zero. From

    $$\mathcal{X}=\sum_{r=1}^{R}a_r'\otimes X_r'=\sum_{j=1}^{R}a_j\otimes\Big(\sum_{r=1}^{R}\alpha_{rj}X_r'\Big),$$

    we know that $X_j=\sum_{r=1}^{R}\alpha_{rj}X_r'$. Taking the inverse of the nonsingular $R\times R$ matrix $[\alpha_{rj}]$, we have $X_r'=\sum_{j=1}^{R}\tilde{\alpha}_{rj}X_j$. Consequently, since the two decompositions are different, there exist $r,j_1,j_2\in\{1,\ldots,R\}$ such that $j_1\ne j_2$ and $\tilde{\alpha}_{rj_1}\tilde{\alpha}_{rj_2}\ne0$. Therefore, we obtain

    $$X_r'\in\operatorname{span}_{\mathbb{K}}\{X_{j_1},\ldots,X_{j_s}\}\cap\Sigma_{L_r}(\mathbb{K}^{J\times K}).$$

    But $X_r'$ does not belong to $\{X_{j_1},\ldots,X_{j_s}\}$, which contradicts the condition of the theorem.

    Conversely, if there exists $X_{j_t}'\in\operatorname{span}_{\mathbb{K}}\{X_{j_1},\ldots,X_{j_s}\}\cap\Sigma_{L_{j_t}}(\mathbb{K}^{J\times K})$ such that $X_{j_t}'\notin\{X_{j_1},\ldots,X_{j_s}\}$, we may assume without loss of generality that $X_{j_t}'=X_1+\chi_2X_2+\cdots+\chi_RX_R$. Now

    $$\begin{aligned}a_1\otimes X_1+\cdots+a_R\otimes X_R&=a_1\otimes X_{j_t}'-\chi_2a_1\otimes X_2-\cdots-\chi_Ra_1\otimes X_R+a_2\otimes X_2+\cdots+a_R\otimes X_R\\&=a_1\otimes X_{j_t}'+(a_2-\chi_2a_1)\otimes X_2+\cdots+(a_R-\chi_Ra_1)\otimes X_R\\&=a_1\otimes X_{j_t}'+a_2'\otimes X_2+\cdots+a_R'\otimes X_R.\end{aligned}$$

    So $\mathcal{X}=\sum_{r=1}^{R}a_r\otimes X_r$ is not essentially unique.

    Example 1. A tensor decomposition of $\mathcal{X}\in\mathbb{K}^{I\times J\times K}$,

    $$\mathcal{X}=\sum_{r=1}^{R}a_r\otimes X_r,$$

    as in Definition 1.4 is essentially unique if the singular vectors of $X_1,\ldots,X_R$ are linearly independent.

    Proof. Assume on the contrary that $\mathcal{X}=\sum_{r=1}^{R}a_r\otimes X_r$ admits a different decomposition $\mathcal{X}=\sum_{r=1}^{R}a_r'\otimes X_r'$; then we have

    $$X_r'=\chi_1X_1+\cdots+\chi_RX_R.$$

    Let

    $$U_r=\begin{pmatrix}|&|&&|\\u_{r1}&u_{r2}&\cdots&u_{rJ}\\|&|&&|\end{pmatrix},\qquad V_r=\begin{pmatrix}|&|&&|\\v_{r1}&v_{r2}&\cdots&v_{rK}\\|&|&&|\end{pmatrix},$$

    where the $u_{rj}$, $1\le r\le R$, $1\le j\le J$, and the $v_{rk}$, $1\le r\le R$, $1\le k\le K$, are linearly independent, and let

    $$X_r=\sigma_{r1}u_{r1}\otimes v_{r1}+\cdots+\sigma_{rL_r}u_{rL_r}\otimes v_{rL_r}.$$

    Then the rank of $\chi_1X_1+\cdots+\chi_RX_R$ must be greater than or equal to $L_i$, $1\le i\le R$, and equality holds only if $X_r'$ is one of $\{X_1,\ldots,X_R\}$. The uniqueness then follows from Theorem 1.7.


    2.2. Proof of Theorem 1.8

    Proof. It is sufficient to prove the case $\min\{L_1,L_2\}\le J=K<L_1+L_2$. Let $B$ and $C$ denote vector spaces of dimensions $J$, $K$, respectively. Split $B=B_1\oplus B_0\oplus B_2$ and $C=C_1\oplus C_0\oplus C_2$, where $B_1$, $B_0$, $B_2$, $C_1$, $C_0$, and $C_2$ are of dimensions $L_1-l_b$, $l_b$, $L_2-l_b$, $L_1-l_c$, $l_c$, $L_2-l_c$, respectively (note that $J=K$ forces $l_b=l_c$).

    Consider

    $$\begin{aligned}X_1&=b_{1,1}\otimes c_{1,1}+\cdots+b_{1,L_1-l_b}\otimes c_{1,L_1-l_c}+b_{0,1}\otimes c_{0,1}+\cdots+b_{0,l_b}\otimes c_{0,l_c}\in(B_1\oplus B_0)\otimes(C_1\oplus C_0)\cong\mathbb{K}^{L_1}\otimes\mathbb{K}^{L_1},\\X_2&=b_{2,1}\otimes c_{2,1}+\cdots+b_{2,L_2-l_b}\otimes c_{2,L_2-l_c}+b_{0,1}\otimes c_{0,1}+\cdots+b_{0,l_b}\otimes c_{0,l_c}\in(B_2\oplus B_0)\otimes(C_2\oplus C_0)\cong\mathbb{K}^{L_2}\otimes\mathbb{K}^{L_2},\end{aligned}$$

    where

    $$\{b_{0,1},\ldots,b_{0,l_b}\},\ \{b_{1,1},\ldots,b_{1,L_1-l_b}\},\ \{b_{2,1},\ldots,b_{2,L_2-l_b}\},\ \{c_{0,1},\ldots,c_{0,l_c}\},\ \{c_{1,1},\ldots,c_{1,L_1-l_c}\},\ \{c_{2,1},\ldots,c_{2,L_2-l_c}\}$$

    are bases for $B_0$, $B_1$, $B_2$, $C_0$, $C_1$ and $C_2$, respectively, with $J+l_b=L_1+L_2$ and $K+l_c=L_1+L_2$. Suppose $\chi_1$, $\chi_2$ are both nonzero. In these bases the matrix pencil $\chi_1X_1+\chi_2X_2$ is diagonal,

    $$\operatorname{diag}(\underbrace{\chi_1,\ldots,\chi_1}_{L_1-l_b},\ \underbrace{\chi_1+\chi_2,\ldots,\chi_1+\chi_2}_{l_b},\ \underbrace{\chi_2,\ldots,\chi_2}_{L_2-l_b}),$$

    so it has rank $J$ when $\chi_1\ne-\chi_2$, and rank $L_1+L_2-2l_b$ when $\chi_1=-\chi_2$. By a simple computation, such a combination can satisfy

    $$\operatorname{rank}(\chi_1X_1+\chi_2X_2)\le L_1\ \text{or}\ L_2$$

    if and only if

    $$J=K\le\max\Big\{\tfrac{2L_1+L_2}{2},\ \tfrac{2L_2+L_1}{2},\ L_1,\ L_2\Big\}.$$

    Then Theorem 1.8 follows from Theorem 1.7.

    Example 2. For $\mathcal{X}\in\mathbb{K}^{2\times3\times3}$, considering the decomposition in a sum of multilinear rank $(1,2,2)$ terms, we have

    $$\mathcal{X}=a_1\otimes(b_1\otimes c_1+b_2\otimes c_2)+a_2\otimes(b_2\otimes c_2+b_3\otimes c_3)=a_1\otimes(b_1\otimes c_1-b_3\otimes c_3)+(a_1+a_2)\otimes(b_2\otimes c_2+b_3\otimes c_3),$$

    where $\{b_1,b_2,b_3\}$, $\{c_1,c_2,c_3\}$, $\{a_1,a_2\}$ are bases for $\mathbb{K}^{3}$, $\mathbb{K}^{3}$, $\mathbb{K}^{2}$, respectively. So this decomposition is not unique.
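    To check the equality, expand the second expression and collect the terms attached to $a_1$ and $a_2$:

    $$a_1\otimes(b_1\otimes c_1-b_3\otimes c_3)+(a_1+a_2)\otimes(b_2\otimes c_2+b_3\otimes c_3)=a_1\otimes(b_1\otimes c_1+b_2\otimes c_2)+a_2\otimes(b_2\otimes c_2+b_3\otimes c_3),$$

    since the $-b_3\otimes c_3$ and $+b_3\otimes c_3$ contributions attached to $a_1$ cancel.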

    Example 3. For $\mathcal{X}\in\mathbb{K}^{2\times4\times2}$, considering the decomposition in a sum of multilinear rank $(1,2,2)$ terms, we have

    $$\mathcal{X}=a_1\otimes(b_1\otimes c_1+b_2\otimes c_2)+a_2\otimes(b_3\otimes c_1+b_4\otimes c_2)=a_1\otimes\big((b_1+b_3)\otimes c_1+(b_2+b_4)\otimes c_2\big)+(a_2-a_1)\otimes(b_3\otimes c_1+b_4\otimes c_2),$$

    where $\{b_1,b_2,b_3,b_4\}$, $\{c_1,c_2\}$, $\{a_1,a_2\}$ are bases for $\mathbb{K}^{4}$, $\mathbb{K}^{2}$, $\mathbb{K}^{2}$, respectively. So this decomposition is not unique.
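    The same kind of bookkeeping verifies this identity: the terms $b_3\otimes c_1+b_4\otimes c_2$ attached to $-a_1$ cancel the extra $b_3\otimes c_1+b_4\otimes c_2$ inside the first factor, leaving

    $$a_1\otimes(b_1\otimes c_1+b_2\otimes c_2)+a_2\otimes(b_3\otimes c_1+b_4\otimes c_2).$$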

    Example 4. There are explicit Weierstrass canonical forms (see Chapter 10 in [16]) of tensors in $\mathbb{K}^{2\times L\times L}$. Each of them can be decomposed into a sum of rank-$(1,L,L)$ terms as follows:

    $$a_1\otimes(b_1\otimes c_1+\cdots+b_L\otimes c_L)+a_2\otimes(\lambda_1b_1\otimes c_1+\cdots+\lambda_Lb_L\otimes c_L),$$

    but this decomposition is obviously not unique.


    2.3. Proof of Theorem 1.9

    Proof. It is sufficient to prove the case $I=R$, $K=\sum_{r=1}^{R}L_r$. Let $B$ and $C$ denote vector spaces of dimensions $J$, $K$, respectively. Choose the splitting of $C$ as $C=\bigoplus_{1\le r\le R}C_r$, and fix a basis $\{b_1,\ldots,b_J\}$ for $B$.

    Without loss of generality, for $1\le p\le R$, we can assume

    $$E_{j_p}=b_{j_p,1}\otimes c_{j_p,1}+b_{j_p,2}\otimes c_{j_p,2}+\cdots+b_{j_p,L_{j_p}}\otimes c_{j_p,L_{j_p}}\in B_{j_p}\otimes C_{j_p},$$

    where $\{b_{j_p,1},\ldots,b_{j_p,L_{j_p}}\}\subseteq\{b_1,\ldots,b_J\}$ (this is possible since $J\ge2\max_i\{L_i\}$ and $J-\max_i\{L_i\}\ge R$), and $\{b_{j_p,1},\ldots,b_{j_p,L_{j_p}}\}$, $\{c_{j_p,1},\ldots,c_{j_p,L_{j_p}}\}$ are bases for $B_{j_p}$, $C_{j_p}$, respectively. Further, let

    $$E_{j_t}'=b_1\otimes c_1+\cdots+b_{L_{j_t}}\otimes c_{L_{j_t}}$$

    be a general point of $\Sigma_{L_{j_t}}(\mathbb{K}^{J\times K})$ and set

    $$E_{j_t}'=\sum_{1\le p\le s}\chi_pE_{j_p}=\sum_{1\le p\le s}\chi_p\big(b_{j_p,1}\otimes c_{j_p,1}+b_{j_p,2}\otimes c_{j_p,2}+\cdots+b_{j_p,L_{j_p}}\otimes c_{j_p,L_{j_p}}\big).$$

    If there exist $\chi_\mu$, $\chi_\nu$ which are both nonzero, then, as in the proof of Theorem 1.8, the corresponding pencil

    $$\operatorname{diag}(\chi_\mu,\ldots,\chi_\mu,\ \chi_\mu+\chi_\nu,\ldots,\chi_\mu+\chi_\nu,\ \chi_\nu,\ldots,\chi_\nu)$$

    has rank at least $L_{j_\mu}+L_{j_\nu}$, which is bigger than $L_{j_t}$ by the assumption $L_i+L_j>L_k$. This implies that $E_{j_t}'$ is not a matrix in $\Sigma_{L_{j_t}}(\mathbb{K}^{J\times K})$, a contradiction. Therefore at most one $\chi_p$ is nonzero, so $E_{j_t}'\in\{E_{j_1},\ldots,E_{j_s}\}$ (up to the trivial scaling), and the uniqueness follows from Theorem 1.7.

    The following Remark can be easily obtained using elementary combinatorics.

    Remark 3. When

    $$I\ge R,\quad J,\ K\ge\sum_{r=1}^{R}L_r,\quad L_i+L_j>L_k\ \ \text{for all }1\le i,j,k\le R,$$

    a low multilinear rank tensor decomposition of $\mathcal{X}$ as in Definition 1.4 has a unique expression

    $$\mathcal{X}=\sum_{r=1}^{R}a_r\otimes\Bigg(\sum_{i=1+\sum_{u=1}^{r-1}L_u}^{\sum_{u=1}^{r}L_u}b_i\otimes c_i\Bigg).$$
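    For instance (an illustrative special case of the formula above), with $R=2$, $L_1=2$ and $L_2=3$, the index blocks are $\{1,2\}$ and $\{3,4,5\}$, and the expression reads

    $$\mathcal{X}=a_1\otimes(b_1\otimes c_1+b_2\otimes c_2)+a_2\otimes(b_3\otimes c_3+b_4\otimes c_4+b_5\otimes c_5).$$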

    3. Proof of Theorem 1.10

    Proof. Assume on the contrary that $\mathcal{X}=\sum_{r=1}^{R}a_r\otimes X_r$ admits a different decomposition $\mathcal{X}=\sum_{r=1}^{R}a_r'\otimes X_r'$ as in Definition 1.6. Let the transformation matrix between the orthogonal frames $\{a_r,\ 1\le r\le R\}$ and $\{a_r',\ 1\le r\le R\}$ be $Q=[\varepsilon_{ir}]$, an $R\times R$ special orthogonal matrix. Then we have

    $$[X_1\ \cdots\ X_R]\begin{pmatrix}a_1\\\vdots\\a_R\end{pmatrix}=[X_1'\ \cdots\ X_R']\begin{pmatrix}a_1'\\\vdots\\a_R'\end{pmatrix}=[X_1'\ \cdots\ X_R']\,Q\begin{pmatrix}a_1\\\vdots\\a_R\end{pmatrix}.$$

    Since $\{a_r',\ 1\le r\le R\}$ is orthogonal, taking the inner product of $\mathcal{X}$ with $a_r'$ we obtain

    $$X_r'=\varepsilon_{r1}X_1+\cdots+\varepsilon_{rR}X_R,\qquad1\le r\le R.$$

    However, by the assumption of the theorem there exists $k$ with

    $$\operatorname{rank}X_k'=\operatorname{rank}(\varepsilon_{k1}X_1+\cdots+\varepsilon_{kR}X_R)\notin\{L_1,\ldots,L_R\},$$

    which is a contradiction, since every $X_k'$ in a decomposition as in Definition 1.6 has rank in $\{L_1,\ldots,L_R\}$. Therefore $\mathcal{X}=\sum_{r=1}^{R}a_r\otimes X_r$ as in Definition 1.6 is essentially unique.

    Conversely, assume that for some non-identity special orthogonal matrix $Q=[\varepsilon_{ir}]_{R\times R}$, the combination $\varepsilon_{i1}X_1+\cdots+\varepsilon_{iR}X_R$ has rank $L_i$ for every $1\le i\le R$. Letting

    $$X_i'=\varepsilon_{i1}X_1+\cdots+\varepsilon_{iR}X_R,\qquad1\le i\le R,$$

    we then have $\mathcal{X}=\sum_{r=1}^{R}a_r'\otimes X_r'$ for the orthogonal frame $\{a_r'\}$ obtained from $\{a_r\}$ via $Q$. So the decomposition is not essentially unique.

    Remark 4. Since a rotation matrix in the plane has the form

    $$\begin{pmatrix}\cos\theta&-\sin\theta\\\sin\theta&\cos\theta\end{pmatrix},$$

    a tensor decomposition of $\mathcal{X}\in\mathbb{K}^{I\times J\times K}$ in an orthogonal frame, $\mathcal{X}=a_1\otimes X_1+a_2\otimes X_2$ as in Definition 1.6, is essentially unique if and only if for any $\theta$, $0<\theta<\pi$, at least one of

    $$\operatorname{rank}(\cos\theta\,X_1-\sin\theta\,X_2)\quad\text{and}\quad\operatorname{rank}(\sin\theta\,X_1+\cos\theta\,X_2)$$

    does not belong to $\{L_1,L_2\}$ (cf. Theorem 1.10).


    4. Conclusion

    Different from most current approaches to the analysis of big data sets, in this paper some uniqueness characterizations of the low multilinear rank tensor decomposition (LRD) are given under the framework of algebraic geometry. The proposed framework leads to a new approach for studying identifiability properties of block tensor decompositions that can be used to handle big data sets. Several explicit uniqueness criteria for tensor decompositions into low multilinear rank terms are given.


    Acknowledgments

    The first author, Ming Yang, is grateful to Mingqing Xiao for his insight and for the clarity of the proofs. The first author is also grateful to Qiang Cheng's machine learning lab, where this paper was mainly written.


    [1] L. Chiantini and C. Ciliberto, Weakly defective varieties, Transactions of the American Mathematical Society, 354 (2002), 151-178.
    [2] L. Chiantini and G. Ottaviani, On generic identifiability of 3-tensors of small rank, SIAM Journal on Matrix Analysis and Applications, 33 (2012), 1018-1037.
    [3] L. Chiantini, G. Ottaviani and N. Vannieuwenhoven, An algorithm for generic and low-rank specific identifiability of complex tensors, SIAM Journal on Matrix Analysis and Applications, 35 (2014), 1265-1287.
    [4] A. Cichocki, D. Mandic, L. De Lathauwer, G. Zhou, Q. Zhao, C. Caiafa and H. Phan, Tensor decompositions for signal processing applications: From two-way to multiway component analysis, IEEE Signal Processing Magazine, 32 (2015), 145-163.
    [5] P. Comon, Tensor decompositions: State of the art and applications, in J. G. McWhirter and I. K. Proudler (eds.), Mathematics in Signal Processing V, Clarendon Press, Oxford, 71 (2002), 1-24.
    [6] L. De Lathauwer, Decompositions of a higher-order tensor in block terms. Part I: Lemmas for partitioned matrices, SIAM Journal on Matrix Analysis and Applications, 30 (2008), 1022-1032.
    [7] L. De Lathauwer, Decompositions of a higher-order tensor in block terms. Part II: Definitions and uniqueness, SIAM Journal on Matrix Analysis and Applications, 30 (2008), 1033-1066.
    [8] L. De Lathauwer and D. Nion, Decompositions of a higher-order tensor in block terms. Part III: Alternating least squares algorithms, SIAM Journal on Matrix Analysis and Applications, 30 (2008), 1067-1083.
    [9] I. Domanov and L. De Lathauwer, Generic uniqueness of a structured matrix factorization and applications in blind source separation, IEEE Journal of Selected Topics in Signal Processing, 10 (2016), 701-711.
    [10] I. Domanov and L. De Lathauwer, Generic uniqueness conditions for the canonical polyadic decomposition and INDSCAL, SIAM Journal on Matrix Analysis and Applications, 36 (2015), 1567-1589.
    [11] J. Draisma and J. Kuttler, Bounded-rank tensors are defined in bounded degree, Duke Mathematical Journal, 163 (2014), 35-63.
    [12] J. D. Hauenstein and A. J. Sommese, Witness sets of projections, Applied Mathematics and Computation, 217 (2010), 3349-3354.
    [13] J. D. Hauenstein and A. J. Sommese, Membership tests for images of algebraic sets by linear projections, Applied Mathematics and Computation, 219 (2013), 6809-6818.
    [14] R. Ke, W. Li and M. Xiao, Characterization of extreme points of multi-stochastic tensors, Computational Methods in Applied Mathematics, 16 (2016), 459-474.
    [15] T. G. Kolda and B. W. Bader, Tensor decompositions and applications, SIAM Review, 51 (2009), 455-500.
    [16] J. M. Landsberg, Tensors: Geometry and Applications, AMS, Providence, Rhode Island, USA, 2012.
    [17] M. W. Mahoney, L.-H. Lim and G. E. Carlsson, Algorithmic and statistical challenges in modern large-scale data analysis are the focus of MMDS 2008, ACM SIGKDD Explorations Newsletter, 10 (2008), 57-60.
    [18] F. Malgouyres and J. Landsberg, Stable recovery of the factors from a deep matrix product and application to convolutional network, preprint, arXiv:1703.08044.
    [19] Y. Matsushima (E. Kobayashi, translator), Differentiable Manifolds, Marcel Dekker, Inc., North-Holland Publishing Co., North Miami Beach, FL, U.S.A., 1972.
    [20] E. E. Papalexakis, C. Faloutsos and N. D. Sidiropoulos, Tensors for data mining and data fusion: Models, applications, and scalable algorithms, ACM Transactions on Intelligent Systems and Technology (TIST), 8 (2017), 1-44.
  • © 2016 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
