Citation: Zhouchen Lin. A Review on Low-Rank Models in Data Analysis[J]. Big Data and Information Analytics, 2016, 1(2): 139-161. doi: 10.3934/bdia.2016001
Sparse representation and compressed sensing have achieved tremendous success in practice. They naturally fit first-order data, such as voices and feature vectors. However, in applications we are often faced with other types of data, such as images, videos, and genetic microarrays, which are inherently matrices or even tensors. We are then naturally faced with a question: how should we measure the sparsity of matrices and tensors?
Low-rank models are recent tools that can robustly and efficiently handle high-dimensional data. Although rank has long been used in statistics as a regularizer of matrices, e.g., in reduced rank regression (RRR) [61], and rank constraints are ubiquitous in three-dimensional stereo vision [50], the recent surge of low-rank models was inspired by sparse representation and compressed sensing, and there has been systematic development of new theories and applications. In this context, rank is interpreted as a measure of second order (i.e., matrix) sparsity1, rather than merely a mathematical concept. To illustrate this, take image and video compression as an example: to achieve effective compression, we have to fully utilize the spatial and temporal correlation in images or videos. Take the Netflix challenge2 (Figure 1) as another example: to infer the unknown user ratings on videos, one has to consider both the correlation between users and the correlation between videos. Since the correlation among columns and rows is closely connected to matrix rank, it is natural to use rank as a measure of second order sparsity.
1 The first order sparsity is the sparsity of vectors, whose measure is the number of nonzero entries, i.e., the $\ell_0$ norm.
2 Netflix is a video-rental company that owns a large number of users' ratings on videos; the user/video rating matrix is very sparse. Netflix offered one million US dollars to whoever could improve the prediction of user ratings on videos by 10%. See https://en.wikipedia.org/wiki/Netflix_Prize
In the following, I review the recent development of low-rank models3. I first introduce linear models in Section 2 and then nonlinear ones in Section 3, where the former are classified into single-subspace and multi-subspace models. Theoretical analysis of some linear models, including exact recovery, closed-form solutions, and block-diagonality structure, is also provided in Section 2. I then introduce commonly used optimization algorithms for solving low-rank models in Section 4, classified into convex, nonconvex, and randomized ones. Next, I review representative applications in Section 5. Finally, I conclude the paper in Section 6.
3 There has been an excellent review on low-rank models in image analysis by Zhou et al. [82]. However, my review differs significantly from [82]: it introduces many more low-rank models, e.g., tensor completion and recovery, multi-subspace models, and nonlinear models, while [82] mainly focuses on matrix completion and Robust PCA; it also provides theoretical analysis and randomized algorithms.
The recent boom of low-rank models started from the matrix completion (MC) problem [6] proposed by E. Candès in 2009. We introduce linear models first. Although they look simple, theoretical analyses show that linear models are very robust to strong noise and missing values. In real applications, they also have sufficient data representation power.
Single subspace models extract one overall subspace from data. The most famous one may be the MC problem proposed by E. Candès [6]: given the values of a matrix $D$ at a subset $\Omega$ of its entries, recover the missing entries under the assumption that the whole matrix is of low rank:
$$\min_A\ \operatorname{rank}(A),\quad\text{s.t.}\quad\pi_\Omega(A)=\pi_\Omega(D), \tag{1}$$
where $\pi_\Omega$ is the projection operator that keeps the entries in $\Omega$ unchanged and sets those outside $\Omega$ to zero. Candès and Plan [5] further proposed the relaxed model:
$$\min_A\ \operatorname{rank}(A),\quad\text{s.t.}\quad\|\pi_\Omega(A)-\pi_\Omega(D)\|_F^2\le\varepsilon, \tag{2}$$
in order to handle the case when the observed data are noisy, where $\varepsilon$ bounds the noise level.
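To make this concrete, here is a minimal, illustrative Python sketch of a soft-impute style iteration for matrix completion, obtained after relaxing rank to the nuclear norm (see Section 4); the function names and parameter choices are mine, not from the paper.

```python
import numpy as np

def svt(W, tau):
    """Singular value thresholding: the proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def complete(D, mask, tau=0.5, iters=300):
    """Soft-impute iteration: keep observed entries, shrink singular values.
    D: data matrix (values outside `mask` are ignored); mask: True = observed."""
    A = np.zeros_like(D)
    for _ in range(iters):
        A = svt(np.where(mask, D, A), tau)
    return A

# Toy test: a rank-2 matrix with roughly half of its entries observed.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
mask = rng.random(X.shape) < 0.5
A = complete(X, mask)
print(np.linalg.norm(A - X) / np.linalg.norm(X))  # small relative error
```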
When considering low-rank recovery under strong noise, the problem might seem well solved by traditional Principal Component Analysis (PCA). However, traditional PCA accurately recovers the underlying low-rank structure only when the noise is Gaussian. If the noise is non-Gaussian and strong, even a few outliers can make PCA fail. Due to the great importance of PCA in applications, many scholars spent a lot of effort on robustifying PCA, proposing many so-called "robust PCAs." However, none of them had a theoretical guarantee that, under certain conditions, the underlying low-rank structure can be exactly recovered. In 2009, Chandrasekaran et al. [7] and Wright et al. [68] proposed Robust PCA (RPCA) simultaneously. The problem they considered is how to recover the low-rank structure when the data contain sparse, large outliers:
$$\min_{A,E}\ \operatorname{rank}(A)+\lambda\|E\|_0,\quad\text{s.t.}\quad A+E=D, \tag{3}$$
where $D$ is the data matrix, $A$ is the low-rank component, $E$ collects the sparse outliers, $\|E\|_0$ is the number of nonzero entries of $E$, and $\lambda>0$ is a trade-off parameter. Candès et al. [4] further considered RPCA with missing values:
$$\min_{A,E}\ \operatorname{rank}(A)+\lambda\|E\|_0,\quad\text{s.t.}\quad\pi_\Omega(A+E)=\pi_\Omega(D). \tag{4}$$
In the same paper [4], they also discussed a generalized RPCA model that involves dense Gaussian noise:
$$\min_{A,E}\ \operatorname{rank}(A)+\lambda\|E\|_0,\quad\text{s.t.}\quad\|\pi_\Omega(A+E)-\pi_\Omega(D)\|_F^2\le\varepsilon. \tag{5}$$
Chen et al. [9] considered the case where the outliers cluster in sparse columns and proposed the Outlier Pursuit model, which replaces $\|E\|_0$ in (3) with $\|E\|_{2,0}$, i.e., the number of nonzero columns of $E$.
When the data are tensor-like, Liu et al. [42] generalized matrix completion to tensor completion. Although tensors have a mathematical definition of rank based on the CP decomposition [31], it is not computable. So Liu et al. proposed a new rank for tensors, defined as the sum of the ranks of the matrices unfolded from the tensor along its different modes. Their tensor completion model is thus: given the values of a tensor at some entries, recover the missing values by minimizing this new tensor rank. Using the same new tensor rank, Tan et al. [60] generalized RPCA to tensor recovery: given a tensor, decompose it as the sum of two tensors, one having a low new tensor rank and the other being sparse.
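To make the unfolding-based tensor rank concrete, below is a small illustrative sketch (my own, not code from [42]) of mode-$n$ unfolding and the sum-of-nuclear-norms surrogate used in the convex formulation of tensor completion.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def sum_nuclear(T):
    """Convex surrogate of the sum of the ranks of all unfoldings."""
    return sum(np.linalg.norm(unfold(T, m), ord='nuc') for m in range(T.ndim))

# A rank-one 4 x 5 x 6 tensor: every unfolding has rank one.
a, b, c = (np.random.randn(n) for n in (4, 5, 6))
T = np.einsum('i,j,k->ijk', a, b, c)
print([np.linalg.matrix_rank(unfold(T, m)) for m in range(3)])  # [1, 1, 1]
print(sum_nuclear(T))
```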
There are also matrix factorization based models, such as nonnegative matrix factorization [34]. Such models can be cast as low-rank models. However, they are better viewed as optimization techniques, as mentioned at the end of Section 4.2, so I will not elaborate on them here. Interested readers may refer to several excellent reviews on matrix factorization based methods, e.g., [11,59,65].
To sum up, single-subspace models can be viewed as extensions of traditional PCA; they are mainly for denoising data and finding common components.
MC and RPCA can only extract one subspace from data; they cannot describe finer structure of the data within this subspace. The simplest case of finer structure is the multi-subspace structure, i.e., the data distribute around several subspaces, and we need to find these subspaces. This problem is called the Generalized PCA (GPCA) problem [62] or subspace clustering [63]. It has many solution methods, such as algebraic methods and RANSAC [63], but none of them has a theoretical guarantee. The emergence of sparse representation offered a new approach to this problem. In 2009, E. Elhamifar and R. Vidal proposed the key idea of self-representation, i.e., representing every sample by the other samples. Based on self-representation, they proposed the Sparse Subspace Clustering (SSC) model [14,15], which requires the representation matrix to be sparse:
$$\min_{Z,E}\ \|Z\|_0+\lambda\|E\|_0,\quad\text{s.t.}\quad D=DZ+E,\ \operatorname{diag}(Z)=0, \tag{6}$$
where the constraint $\operatorname{diag}(Z)=0$ prevents a sample from being represented by itself. Inspired by SSC, Liu et al. [39] proposed the Low-Rank Representation (LRR) model4, which requires the representation matrix to be low rank instead:
$$\min_{Z,E}\ \operatorname{rank}(Z)+\lambda\|E\|_{2,0},\quad\text{s.t.}\quad D=DZ+E. \tag{7}$$
The reason for enforcing low-rankness on $Z$ is to better capture the global structure of the data, since samples from a union of low-dimensional subspaces admit a jointly low-rank representation.
4 In their later work [38], Liu et al. changed to use the $\ell_{2,1}$ norm to measure the noise term $E$.
LRR requires that the samples are sufficient. In the case of insufficient samples, Liu and Yan [41] proposed the Latent LRR model:
$$\min_{Z,L,E}\ \operatorname{rank}(Z)+\operatorname{rank}(L)+\lambda\|E\|_0,\quad\text{s.t.}\quad D=DZ+LD+E. \tag{8}$$
They call $DZ$ the principal feature and $LD$ the salient feature; the learned $L$ can also be used for feature extraction. Along another line, Liu et al. [44] proposed the Fixed Rank Representation (FRR) model:
$$\min_{Z,\tilde{Z},E}\ \|Z-\tilde{Z}\|_F^2+\lambda\|E\|_{2,0},\quad\text{s.t.}\quad D=DZ+E,\ \operatorname{rank}(\tilde{Z})\le r, \tag{9}$$
where $r$ is the prescribed rank and $\tilde{Z}$ is a fixed-rank approximation of the representation matrix $Z$.
To further improve the accuracy of subspace clustering, Lu et al. [45] proposed using Trace Lasso to regularize the representation vector:
$$\min_{Z_i,E_i}\ \|D\operatorname{diag}(Z_i)\|_*+\lambda\|E_i\|_0,\quad\text{s.t.}\quad D_i=DZ_i+E_i,\ i=1,\cdots,n, \tag{10}$$
where $D_i$ is the $i$-th sample (column) of $D$, $Z_i$ is its representation vector, and $\|D\operatorname{diag}(Z_i)\|_*$ is called the Trace Lasso of $Z_i$. When the columns of $D$ are normalized, Trace Lasso interpolates between the $\ell_2$ norm and the $\ell_1$ norm:
$$\|Z_i\|_2\le\|D\operatorname{diag}(Z_i)\|_*\le\|Z_i\|_1.$$
Moreover, the lower bound is attained when the data are completely correlated (all columns equal to one vector or its negative), while the upper bound is attained when the data are completely uncorrelated (the columns being orthogonal). Therefore, Trace Lasso adapts to the correlation among samples. This model is called Correlation Adaptive Subspace Segmentation (CASS).
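The two bounds are easy to verify numerically. A tiny illustrative check (assuming unit-norm columns, as in [45]; the helper name is mine):

```python
import numpy as np

def trace_lasso(D, z):
    """Trace Lasso ||D diag(z)||_*; note D * z scales column i of D by z[i]."""
    return np.linalg.norm(D * z, ord='nuc')

z = np.array([1.0, -2.0, 0.5, 3.0])
u = np.random.randn(4); u /= np.linalg.norm(u)
D_corr = np.column_stack([u, -u, u, -u])   # completely correlated columns
D_orth = np.eye(4)                         # completely uncorrelated columns
print(trace_lasso(D_corr, z), np.linalg.norm(z, 2))  # equal: the l2 extreme
print(trace_lasso(D_orth, z), np.linalg.norm(z, 1))  # equal: the l1 extreme
```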
For better clustering of tensor data, Fu et al. proposed the Tensor LRR model [19], so as to fully utilize the information of tensor in different modes.
In summary, multi-subspace models can describe the structure of data much better than single-subspace ones. Their main purpose is to cluster data, in sharp contrast to that of single-subspace models, which is to denoise data.
The theoretical analysis on low-rank models is relatively rich. It consists of the following three parts.
The above-mentioned low-rank models are all discrete optimization problems, most of which are NP-hard, which makes efficient solution difficult. To overcome this difficulty, a common way is to approximate the discrete low-rank models by convex programs. Roughly speaking, the convex envelope of rank over the unit ball of the matrix spectral norm is the nuclear norm (the sum of singular values), and the convex envelope of the $\ell_0$ pseudo-norm over the unit ball of the $\ell_\infty$ norm is the $\ell_1$ norm; replacing rank and $\ell_0$ with these surrogates turns the models into convex programs. Remarkably, under certain conditions, the convex programs exactly recover the underlying low-rank structure, as proven for MC [6] and RPCA [4].
When data are noisy, it is inappropriate to use the noisy data to represent the data themselves. A more reasonable way is to denoise the data first and then apply self-representation on the denoised data, resulting in modified LRR and Latent LRR models:
$$\min_{Z,A,E}\ \|Z\|_*+\lambda\|E\|_{2,1},\quad\text{s.t.}\quad D=A+E,\ A=AZ, \tag{11}$$
and
$$\min_{Z,L,A,E}\ \|Z\|_*+\|L\|_*+\lambda\|E\|_1,\quad\text{s.t.}\quad D=A+E,\ A=AZ+LA. \tag{12}$$
By utilizing the closed-form solutions presented in the next subsection, Zhang et al. [76] proved that the solutions to the modified LRR and Latent LRR models can be expressed in terms of the solutions to the corresponding RPCA-type models:
$$\min_{A,E}\ \operatorname{rank}(A)+\lambda\|E\|_{2,1},\quad\text{s.t.}\quad D=A+E, \tag{13}$$
and
$$\min_{A,E}\ \operatorname{rank}(A)+\lambda\|E\|_1,\quad\text{s.t.}\quad D=A+E, \tag{14}$$
respectively. So the exact recovery results of RPCA [4] and Outlier Pursuit [9,74] can be applied to the modified LRR and Latent LRR models, where again only the column space of the underlying low-rank component is guaranteed to be exactly recovered.
An interesting property of low-rank models is that they may have closed-form solutions when the data are noiseless. In comparison, sparse models do not have such a property. Wei and Lin [66] analyzed the mathematical properties of LRR. They first found that the noiseless LRR model:
$$\min_Z\ \|Z\|_*,\quad\text{s.t.}\quad D=DZ, \tag{15}$$
has a unique closed-form solution. Let the skinny SVD of $D$ be $U\Sigma V^T$; then the minimizer is $Z^*=VV^T$, which is known as the Shape Interaction Matrix in structure from motion. They further showed that LRR with a general dictionary $B$:
$$\min_Z\ \|Z\|_*,\quad\text{s.t.}\quad D=BZ, \tag{16}$$
also has a unique closed-form solution: $Z^*=B^\dagger D$, where $B^\dagger$ is the Moore-Penrose pseudo-inverse of $B$.
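Both formulas are easy to check numerically; the following illustrative snippet verifies that $B^\dagger D$ (here with $B=D$) coincides with the Shape Interaction Matrix $VV^T$ and is feasible:

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.standard_normal((30, 4)) @ rng.standard_normal((4, 20))  # rank-4 data

U, s, Vt = np.linalg.svd(D, full_matrices=False)                 # skinny SVD
Vr = Vt[s > 1e-10]                                               # right factor
Z_sim = Vr.T @ Vr                                                # V V^T
Z_pinv = np.linalg.pinv(D) @ D                                   # B^+ D, B = D

print(np.allclose(Z_sim, Z_pinv))   # the two closed forms coincide
print(np.allclose(D @ Z_sim, D))    # feasibility: D = D Z
```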
Multi-subspace clustering models all result in a representation matrix $Z$, from which an affinity matrix is built for spectral clustering. For correct clustering, $Z$ is expected to be block-diagonal, with each block corresponding to one subspace; such block-diagonality can be proven for several models when the subspaces are independent and the data are noiseless.
The grouping effect among the representation coefficients, i.e., that similar samples should have similar representation coefficient vectors, is also helpful for maintaining the block-diagonal structure of the representation matrix $Z$ when the data are noisy.
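For completeness, here is a common recipe (an illustrative sketch, not tied to any single paper) for turning a representation matrix $Z$ into cluster labels via spectral clustering:

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from scipy.linalg import eigh
from scipy.cluster.vq import kmeans2

def cluster_from_representation(Z, k):
    """Spectral clustering on the affinity (|Z| + |Z^T|)/2 built from Z."""
    W = (np.abs(Z) + np.abs(Z.T)) / 2
    L = laplacian(W, normed=True)
    _, vecs = eigh(L)
    F = vecs[:, :k]                                  # k smallest eigenvectors
    F /= np.linalg.norm(F, axis=1, keepdims=True) + 1e-12
    _, labels = kmeans2(F, k, minit='++', seed=0)
    return labels
```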
To conclude, linear models are relatively simple yet powerful enough to model complex data distributions. They can also have good mathematical properties and theoretical guarantees.
Linear models assume that the data distribute near some low-dimensional subspaces, an assumption that can be easily violated in real applications, so developing nonlinear models is necessary. However, low-rank models for clustering nonlinear manifolds are relatively few. A natural idea, proposed by Wang et al. [64], is to utilize the kernel trick. The idea is as follows. Suppose that via a nonlinear mapping $\phi$, the data $X$ are mapped into a feature space in which they distribute near low-dimensional subspaces; then one may perform LRR in the feature space:
$$\min_Z\ \|\phi(X)-\phi(X)Z\|_F^2+\lambda\|Z\|_*.$$
Since the first term equals $\operatorname{tr}\big((I-Z)^T K(I-Z)\big)$ with $K=\phi(X)^T\phi(X)$, the model depends on $\phi(X)$ only through the kernel matrix $K$, and hence can be solved without knowing $\phi$ explicitly.
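A quick numerical confirmation of this fact (illustrative; `Phi` stands in for $\phi(X)$ with samples as columns):

```python
import numpy as np

rng = np.random.default_rng(0)
Phi = rng.standard_normal((100, 15))       # phi(X): feature dim x num samples
Z = rng.standard_normal((15, 15))
K = Phi.T @ Phi                            # kernel matrix

direct = np.linalg.norm(Phi - Phi @ Z, 'fro')**2
via_K = np.trace((np.eye(15) - Z).T @ K @ (np.eye(15) - Z))
print(np.allclose(direct, via_K))          # the fit term needs only K
```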
The other heuristic approach is to add a Laplacian, or hyper-Laplacian, regularizer to the corresponding linear models; it is claimed that such regularizers can capture the nonlinear geometry of the data distribution. For example, Lu et al. [49] added the Laplacian regularization $\operatorname{tr}(ZLZ^T)$, where $L$ is the graph Laplacian built from the data, to the LRR model.
Although these modifications of linear models result in more powerful nonlinear models, it is hard to analyze their properties, so their performance may depend heavily on the choice of parameters.
Once we have a mathematical model, we need to solve it efficiently. The discrete low-rank models in Section 2 are mostly NP-hard, so most of the time they can only be solved approximately. A common way is to convert them into continuous optimization problems. There are two ways to do so. The first is to convert them into convex programs; for example, as mentioned above, one may replace rank with the nuclear norm and the $\ell_0$ pseudo-norm with the $\ell_1$ norm. The second is to adopt nonconvex surrogates of rank and $\ell_0$, or to factorize the low-rank matrix directly, which often gives better solutions at the price of losing global optimality guarantees.
Convex optimization is a relatively mature field with many polynomial-complexity algorithms, such as interior point methods. However, for large-scale or high-dimensional data, interior point methods do not scale, as they rely on second-order information; we therefore need algorithms with low per-iteration cost.
Currently, all the optimization methods for large scale computing are first order methods. Representative algorithms include Accelerated Proximal Gradient (APG) [2,54], the Frank-Wolfe algorithm [18,26], and the Alternating Direction Method (ADM) [36,37].
APG is basically for unconstrained problems:
$$\min_x\ f(x), \tag{17}$$
where the objective function $f$ is convex and differentiable, with Lipschitz continuous gradient:
$$\|\nabla f(x)-\nabla f(y)\|\le L_f\|x-y\|,\quad\forall x,y, \tag{18}$$
in which $L_f$ is the Lipschitz constant. The convergence rate of traditional gradient descent is only $O(1/k)$, where $k$ is the number of iterations. By introducing an auxiliary sequence $\{y_k\}$ and an extrapolation step, APG [54,2] achieves the optimal rate $O(1/k^2)$ for this class of problems:
$$x_k=y_k-L_f^{-1}\nabla f(y_k),\qquad t_{k+1}=\frac{1+\sqrt{1+4t_k^2}}{2},\qquad y_{k+1}=x_k+\frac{t_k-1}{t_{k+1}}(x_k-x_{k-1}), \tag{19}$$
where $t_1=1$ and $y_1=x_0$.
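The iteration (19) is short enough to state in code; below is a minimal illustrative implementation for a smooth quadratic test problem (the test function and step choices are mine):

```python
import numpy as np

def apg(grad_f, Lf, x0, iters=500):
    """Accelerated proximal gradient for smooth f, following update (19)."""
    x_prev = x = x0.copy()
    y, t = x0.copy(), 1.0
    for _ in range(iters):
        x_prev, x = x, y - grad_f(y) / Lf
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x + ((t - 1) / t_next) * (x - x_prev)
        t = t_next
    return x

# Toy problem: f(x) = 0.5 ||Ax - b||^2, with Lf = ||A||_2^2.
A = np.random.randn(40, 20); b = np.random.randn(40)
x = apg(lambda x: A.T @ (A @ x - b), np.linalg.norm(A, 2)**2, np.zeros(20))
print(np.linalg.norm(A.T @ (A @ x - b)))   # gradient norm is near zero
```

APG has also been generalized to nonsmooth composite problems: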
$$\min_x\ g(x)+f(x), \tag{20}$$
where $f$ is as above and $g$ is convex but possibly nonsmooth (e.g., the $\ell_1$ norm or the nuclear norm); the gradient step in (19) is then simply followed by the proximal mapping of $g$ [2]. APG can further handle linearly constrained problems:
$$\min_x\ f(x),\quad\text{s.t.}\quad\mathcal{A}(x)=b, \tag{21}$$
where $\mathcal{A}$ is a linear operator. The idea is to move the constraint into the objective as a penalty:
$$\min_x\ f(x)+\frac{\beta}{2}\|\mathcal{A}(x)-b\|^2, \tag{22}$$
and then solve (22) by APG. To speed up convergence, the penalty parameter $\beta$ should be increased gradually along the iterations, rather than being fixed at a large value from the start.
For problems with a convex set constraint:
$$\min_x\ f(x),\quad\text{s.t.}\quad x\in\mathcal{C}, \tag{23}$$
where $\mathcal{C}$ is a compact convex set, the Frank-Wolfe algorithm [18,26]:
$$g_k=\operatorname*{argmin}_{g\in\mathcal{C}}\ \langle g,\nabla f(x_k)\rangle,\qquad x_{k+1}=(1-\gamma_k)x_k+\gamma_k g_k,\quad\text{where }\gamma_k=\frac{2}{k+2}, \tag{24}$$
can be used to solve (23). In particular, when the constraint set $\mathcal{C}$ is the nuclear norm ball $\{X \mid \|X\|_*\le\rho\}$, the linear subproblem for $g_k$ only requires computing the leading singular vector pair of $\nabla f(x_k)$, which is much cheaper than a full SVD [27].
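A sketch of that linear subproblem, as an illustrative linear minimization oracle over the ball $\{X:\|X\|_*\le\rho\}$ (the names are mine):

```python
import numpy as np
from scipy.sparse.linalg import svds

def lmo_nuclear_ball(G, rho):
    """argmin over ||S||_* <= rho of <S, G>: a rank-one matrix built from
    the leading singular vector pair of the gradient G."""
    u, s, vt = svds(G, k=1)
    return -rho * np.outer(u[:, 0], vt[0])

def frank_wolfe(grad_f, shape, rho, iters=200):
    """Frank-Wolfe iteration (24) over the nuclear norm ball."""
    X = np.zeros(shape)
    for k in range(iters):
        S = lmo_nuclear_ball(grad_f(X), rho)
        gamma = 2.0 / (k + 2)
        X = (1 - gamma) * X + gamma * S
    return X
```

For matrix completion, for instance, one would take $f(X)=\frac{1}{2}\|\pi_\Omega(X-D)\|_F^2$, whose gradient is simply $\pi_\Omega(X-D)$.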
ADM fits convex problems with separable objective functions and linear (or convex-set) constraints:
$$\min_{x,y}\ f(x)+g(y),\quad\text{s.t.}\quad\mathcal{A}(x)+\mathcal{B}(y)=c, \tag{25}$$
where $f$ and $g$ are convex and $\mathcal{A}$ and $\mathcal{B}$ are linear operators. ADM operates on the augmented Lagrangian:
$$L(x,y,\lambda)=f(x)+g(y)+\langle\lambda,\mathcal{A}(x)+\mathcal{B}(y)-c\rangle+\frac{\beta}{2}\|\mathcal{A}(x)+\mathcal{B}(y)-c\|^2, \tag{26}$$
where $\lambda$ is the Lagrange multiplier and $\beta>0$ is the penalty parameter. ADM minimizes $L$ with respect to $x$ and $y$ alternately:
$$x_{k+1}=\operatorname*{argmin}_x\ L(x,y_k,\lambda_k),\qquad y_{k+1}=\operatorname*{argmin}_y\ L(x_{k+1},y,\lambda_k). \tag{27}$$
Finally, ADM updates the Lagrange multiplier [37]:
$$\lambda_{k+1}=\lambda_k+\beta\big(\mathcal{A}(x_{k+1})+\mathcal{B}(y_{k+1})-c\big). \tag{28}$$
The advantage of ADM is that its subproblems are simpler than the original problem and may even have closed-form solutions. When the subproblems are not easily solvable, one may linearize the quadratic penalty $\frac{\beta}{2}\|\mathcal{A}(x)+\mathcal{B}(y)-c\|^2$ (and, if necessary, add a proximal term), leading to the linearized ADM [37,36], whose subproblems often reduce to proximal mappings with closed-form solutions.
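As a concrete instance, the convex RPCA model $\min_{A,E}\|A\|_*+\lambda\|E\|_1$ s.t. $A+E=D$ fits the template (25), and both subproblems in (27) have closed-form solutions. A bare-bones illustrative sketch (fixed $\beta$, no stopping test, parameter defaults mine):

```python
import numpy as np

def soft(W, eps):
    """Entrywise shrinkage, i.e., the operator (29) applied to a matrix."""
    return np.sign(W) * np.maximum(np.abs(W) - eps, 0.0)

def svt(W, tau):
    """Singular value thresholding: shrinkage on the singular values."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def rpca_adm(D, lam=None, beta=1.0, iters=300):
    """ADM for min ||A||_* + lam ||E||_1  s.t.  A + E = D."""
    lam = lam if lam is not None else 1.0 / np.sqrt(max(D.shape))  # common default
    A, E, Lam = (np.zeros_like(D) for _ in range(3))
    for _ in range(iters):
        A = svt(D - E + Lam / beta, 1.0 / beta)    # x-subproblem of (27)
        E = soft(D - A + Lam / beta, lam / beta)   # y-subproblem of (27)
        Lam += beta * (D - A - E)                  # multiplier update (28)
    return A, E
```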
When solving low-rank models with convex surrogates, we are often faced with the following subproblem:
$$\min_X\ \|X\|_*+\frac{\alpha}{2}\|X-W\|_F^2,$$
which has a closed-form solution [3]. Suppose the SVD of $W$ is $U\Sigma V^T$; then the optimal $X$ is $U\Theta_{\alpha^{-1}}(\Sigma)V^T$, where the shrinkage (soft-thresholding) operator $\Theta_\varepsilon$ acts entry-wise:
$$\Theta_\varepsilon(x)=\begin{cases}x-\varepsilon,&\text{if }x>\varepsilon,\\ x+\varepsilon,&\text{if }x<-\varepsilon,\\ 0,&\text{if }-\varepsilon\le x\le\varepsilon.\end{cases} \tag{29}$$
So when solving low-rank models with the nuclear norm, SVD is often indispensable. For large matrices, computing the full SVD is expensive; fortunately, only the singular values exceeding the threshold and their associated singular vectors are needed, so a partial SVD (e.g., by PROPACK [33]) suffices.
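A sketch of singular value thresholding with a truncated SVD (illustrative; scipy's `svds` uses ARPACK-style iterations rather than PROPACK, and `k` must be guessed or grown adaptively):

```python
import numpy as np
from scipy.sparse.linalg import svds

def svt_partial(W, tau, k):
    """SVT computing only the top-k singular triplets; assumes at most k
    singular values of W exceed tau (otherwise increase k and retry)."""
    U, s, Vt = svds(W, k=k)
    keep = s > tau
    return (U[:, keep] * (s[keep] - tau)) @ Vt[keep]
```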
Convex algorithms have the advantage of being independent of initialization. However, the quality of their solutions may not be good enough. So exploring nonconvex algorithms is another hot topic in low-rank models.
Nonconvex algorithms trade initialization independence for better solution quality and possibly faster speed as well. For unconstrained problems that use nonconvex surrogates of rank, such as the Schatten-$p$ quasi-norm ($0<p<1$), a key subproblem is:
$$\min_X\ \sum_{i=1}^n g\big(\sigma_i(X)\big)+\frac{\alpha}{2}\|X-W\|_F^2,$$
where $\sigma_i(X)$ is the $i$-th singular value of $X$ and $g$ is a nonconvex function. Lu et al. [48] showed that, for a broad class of $g$, the solution is still obtained by applying the (generalized) proximal operator of $g$ to the singular values of $W$, i.e., a generalized singular value thresholding. In contrast, the weighted nuclear norm minimization problem [23]:
$$\min_X\ \|X\|_{w,*}+\frac{\alpha}{2}\|X-W\|_F^2,$$
where $\|X\|_{w,*}=\sum_{i=1}^n w_i\sigma_i(X)$, does not have a closed-form solution in general; instead, a small-scale optimization with respect to the singular values needs to be solved numerically.
The third kind of method for low-rank problems is to write the expected low-rank matrix $X$ as the product of two much smaller matrices, $X=UV^T$, so that low-rankness is enforced automatically and the SVD of a large matrix is avoided; the two factors are then updated alternately, e.g., by the nonlinear successive over-relaxation scheme of Wen et al. [67] for matrix completion.
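A minimal illustrative sketch of the factorization approach for matrix completion, via alternating ridge-regularized least squares (far simpler than the nonlinear SOR scheme of [67], but showing the same structure):

```python
import numpy as np

def als_complete(D, mask, r, iters=50, mu=1e-6):
    """Fit X = U @ V.T to the observed entries by alternating least squares."""
    m, n = D.shape
    rng = np.random.default_rng(0)
    U, V = rng.standard_normal((m, r)), rng.standard_normal((n, r))
    I = mu * np.eye(r)
    for _ in range(iters):
        for i in range(m):                      # update each row of U
            Vi = V[mask[i]]
            U[i] = np.linalg.solve(Vi.T @ Vi + I, Vi.T @ D[i, mask[i]])
        for j in range(n):                      # update each row of V
            Uj = U[mask[:, j]]
            V[j] = np.linalg.solve(Uj.T @ Uj + I, Uj.T @ D[mask[:, j], j])
    return U @ V.T
```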
Nonconvex algorithms for low-rank models are much richer than convex ones. The price paid is that their performance may heavily depend on initialization. In this case, prior knowledge is important for proposing a good initialization.
All the above-mentioned methods, whether for convex or nonconvex problems, have a computation complexity of at least $O(mnr)$ per iteration for an $m\times n$ matrix with target rank $r$, due to the (partial) SVDs or matrix multiplications involved, which is still expensive for big data.
Randomized algorithms could bring down the order of computation complexity. However, designing randomized algorithms often needs to consider the characteristic of the problems.
Low-rank models have found wide applications in data analysis and machine learning; for example, there were many papers at NIPS 2011 discussing low-rank models. Below I introduce some representative applications.
Image and video denoising can be conveniently formulated as a matrix completion problem. In [29], Ji et al. first broke each of the video frames into patches, then grouped the similar patches. For each group, the patches are reshaped into vectors and assembled into a matrix. Next, the unreliable (noisy) pixels are detected as those whose values deviate from the means of their corresponding row vectors and the remaining pixels are considered reliable (noiseless). The unreliable pixel values can be estimated by applying the matrix completion model (2) to the matrix by marking them as missing values. After denoising all the patches, each frame is restored by averaging the pixel values in overlapping patches. Part of the results of video denoising are shown in Figure 2.
In document analysis, it is important to extract keywords from documents. Let $D$ be the document-term matrix, whose entries record the occurrences of each term in each document. Min et al. [51] decomposed $D$ into a low-rank component, which captures the background topics shared by the documents, plus a sparse component, whose nonzero entries indicate the keywords of individual documents, via an RPCA-like model.
Background modeling is to separate the background and the foreground in a video. The simplest case is a video taken by a fixed camera, so the background hardly changes. If each frame is vectorized as a column of a matrix, then the background part of the matrix should be of low rank. As the foreground consists of moving objects, it often occupies only a small portion of pixels, so the foreground corresponds to sparse "noise" in the video. We thus arrive at the RPCA model (3) for background modeling, where each column of $D$ is a vectorized frame, the low-rank component $A$ recovers the background, and the sparse component $E$ gives the foreground.
The RPCA model for background modeling assumes that the frames have been aligned so that the background video is of low rank. In the case of misalignment, we may align the frames via appropriate geometric transformations, leading to the model:
$$\min_{\tau,A,E}\ \|A\|_*+\lambda\|E\|_1,\quad\text{s.t.}\quad D\circ\tau=A+E, \tag{30}$$
where $\tau$ stands for the set of parameterized geometric transformations, one for each frame, and $D\circ\tau$ denotes the transformed video. This is the RASL model [56]. Since the constraint is nonlinear in $\tau$, Peng et al. [56] linearized it locally around the current transformation $\tau_k$ and solved:
$$\min_{\Delta\tau_k,A,E}\ \|A\|_*+\lambda\|E\|_1,\quad\text{s.t.}\quad D\circ\tau_k+J\Delta\tau_k=A+E, \tag{31}$$
where $J$ is the Jacobian of $D\circ\tau$ with respect to the transformation parameters; $\tau$ is then updated as $\tau_{k+1}=\tau_k+\Delta\tau_k$, and the process is iterated until convergence.
The purpose of Transform Invariant Low-rank Textures (TILT) [78] is to rectify an image patch $D$ with a geometric transformation $\tau$ so that the underlying regular texture, which is of low rank, is recovered; its formulation is the same as (30), now with $D$ being a single image patch, and it is solved by the same linearization technique as RASL.
In principle, TILT should work for any parameterized transformations. Zhang et al. [79] further considered TILT under generalized cylindrical transformations, which can be used for texture unwarping from buildings. Some examples are shown in Figure 7.
TILT is also widely applied to geometric modeling of buildings [78], camera self-calibration, and lens distortion auto-correction [80]. Due to its importance in applications, Ren and Lin [58] proposed a fast algorithm for TILT to speed up its solution by more than five times.
Motion segmentation means to cluster the feature points on moving objects in a video, such that each cluster corresponds to an independent object, which can then be identified and tracked. For each feature point, its feature vector consists of its image coordinates in all the frames and forms a column of the data matrix $D$. Since the trajectories of feature points on the same rigid object lie in the same low-dimensional subspace, motion segmentation reduces to a subspace clustering problem, to which the multi-subspace models of Section 2 (e.g., LRR) have been successfully applied.
Image segmentation is to partition an image into homogeneous regions and can be viewed as a special clustering problem. Cheng et al. [10] proposed to over-segment the image into superpixels, extract several types of features from the superpixels, and fuse the multiple features via an integrated LRR model, in which basically each feature corresponds to an LRR model. After obtaining the global representation matrix $Z$, spectral clustering on the associated affinity matrix merges the superpixels into the final segments.
Gene clustering is to group genes with similar functionality. Identifying gene clusters from gene expression data is helpful for the discovery of novel functional gene interactions. Let $D$ be the gene expression matrix, in which each column collects the expression levels of one gene under different conditions. Cui et al. [12] applied LRR to $D$ to identify subspace gene clusters.
Saliency detection is to detect the visually salient regions in an image without understanding its content. Motion segmentation, image segmentation, and gene clustering all utilize the representation matrix $Z$ of LRR-type models to build an affinity for clustering. Saliency detection instead exploits the sparse error term $E$: Lang et al. [32] represented image patches by the other patches of the same image via an LRR-like joint model, and the patches with large sparse errors, i.e., those that cannot be well represented by the remaining patches, are identified as salient.
There have been many other applications of low-rank models, such as partial-duplicate image search [70], face recognition [57], structured texture repairing [35], man-made object upright orientation [30], photometric stereo [69], image tag refinement [83], robust visual domain adaptation [28], robust visual tracking [77], feature extraction from 3D faces [52], ghost image removal in computed tomography [21], semi-supervised image classification [84], image set co-segmentation [53], and even audio analysis [55], protein-gene correlation analysis, network flow abnormality detection, robust filtering, and system identification. Due to the space limit, I omit their introductions.
Low-rank models have found wide applications in many fields, including signal processing, machine learning, and computer vision. In just a few years, there has been rapid development of theories, algorithms, and applications of low-rank models; this review is only a sketchy introduction to this dynamic research topic. For many real problems, combining the characteristics of the problem with proper low-rankness constraints very often leads to better results. In some problems, the raw data may not have a low-rank property; however, low-rankness can be enhanced by incorporating appropriate transforms (as in the improvement of RASL/TILT over RPCA). Some scholars did not check whether their data have the low-rank property, or did not do proper pre-processing, before claiming that low-rank constraints do not work well; this should be avoided. From the above review, we can see that low-rank models still lack research in the following aspects: generalization from matrices to tensors, nonlinear manifold clustering, and randomized algorithms of low complexity (ideally polylogarithmic in the matrix size).
Z. Lin is supported by NSF China (grant nos. 61272341 and 61231002), 973 Program of China (grant no. 2015CB352502), and Microsoft Research Asia Collaborative Research Program.
[1] A. Adler, M. Elad and Y. Hel-Or, Probabilistic subspace clustering via sparse representations, IEEE Signal Processing Letters, 20(2013), 63-66.
[2] A. Beck and M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM Journal on Imaging Sciences, 2(2009), 183-202.
[3] J. Cai, E. Candès and Z. Shen, A singular value thresholding algorithm for matrix completion, SIAM Journal on Optimization, 20(2010), 1956-1982.
[4] E. Candès, X. Li, Y. Ma and J. Wright, Robust principal component analysis?, Journal of the ACM, 58(2011), Art. 11, 37 pp.
[5] E. Candès and Y. Plan, Matrix completion with noise, Proceedings of the IEEE, 98(2010), 925-936.
[6] E. Candès and B. Recht, Exact matrix completion via convex optimization, Foundations of Computational Mathematics, 9(2009), 717-772.
[7] V. Chandrasekaran, S. Sanghavi, P. Parrilo and A. Willsky, Sparse and low-rank matrix decompositions, Annual Allerton Conference on Communication, Control, and Computing, 2009, 962-967.
[8] C. Chen, B. He, Y. Ye and X. Yuan, The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent, Mathematical Programming, 155(2016), 57-79.
[9] Y. Chen, H. Xu, C. Caramanis and S. Sanghavi, Robust matrix completion with corrupted columns, International Conference on Machine Learning, 2011, 873-880.
[10] B. Cheng, G. Liu, J. Wang, Z. Huang and S. Yan, Multi-task low-rank affinity pursuit for image segmentation, International Conference on Computer Vision, 2011, 2439-2446.
[11] A. Cichocki, R. Zdunek, A. H. Phan and S. Ichi Amari, Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation, 1st edition, Wiley, 2009.
[12] Y. Cui, C.-H. Zheng and J. Yang, Identifying subspace gene clusters from microarray data using low-rank representation, PLoS One, 8(2013), e59377.
[13] P. Drineas, R. Kannan and M. Mahoney, Fast Monte Carlo algorithms for matrices II: Computing a low rank approximation to a matrix, SIAM Journal on Computing, 36(2006), 158-183.
[14] E. Elhamifar and R. Vidal, Sparse subspace clustering, IEEE International Conference on Computer Vision and Pattern Recognition, 2009, 2790-2797.
[15] E. Elhamifar and R. Vidal, Sparse subspace clustering: Algorithm, theory, and applications, IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(2013), 2765-2781.
[16] P. Favaro, R. Vidal and A. Ravichandran, A closed form solution to robust subspace estimation and clustering, IEEE Conference on Computer Vision and Pattern Recognition, 2011, 1801-1807.
[17] J. Feng, Z. Lin, H. Xu and S. Yan, Robust subspace segmentation with block-diagonal prior, IEEE Conference on Computer Vision and Pattern Recognition, 2014, 3818-3825.
[18] M. Frank and P. Wolfe, An algorithm for quadratic programming, Naval Research Logistics Quarterly, 3(1956), 95-110.
[19] Y. Fu, J. Gao, D. Tien and Z. Lin, Tensor LRR based subspace clustering, International Joint Conference on Neural Networks, 2014, 1877-1884.
[20] A. Ganesh, Z. Lin, J. Wright, L. Wu, M. Chen and Y. Ma, Fast algorithms for recovering a corrupted low-rank matrix, International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, 2009, 213-216.
[21] H. Gao, J.-F. Cai, Z. Shen and H. Zhao, Robust principal component analysis-based four-dimensional computed tomography, Physics in Medicine and Biology, 56(2011), 3181-3198.
[22] M. Grant and S. Boyd, CVX: Matlab software for disciplined convex programming (web page and software), http://cvxr.com/cvx/, 2009.
[23] S. Gu, L. Zhang, W. Zuo and X. Feng, Weighted nuclear norm minimization with application to image denoising, IEEE Conference on Computer Vision and Pattern Recognition, 2014, 2862-2869.
[24] H. Hu, Z. Lin, J. Feng and J. Zhou, Smooth representation clustering, IEEE Conference on Computer Vision and Pattern Recognition, 2014, 3834-3841.
[25] Y. Hu, D. Zhang, J. Ye, X. Li and X. He, Fast and accurate matrix completion via truncated nuclear norm regularization, IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(2013), 2117-2130.
[26] M. Jaggi, Revisiting Frank-Wolfe: Projection-free sparse convex optimization, International Conference on Machine Learning, 2013, 427-435.
[27] M. Jaggi and M. Sulovský, A simple algorithm for nuclear norm regularized problems, International Conference on Machine Learning, 2010, 471-478.
[28] I. Jhuo, D. Liu, D. Lee and S. Chang, Robust visual domain adaptation with low-rank reconstruction, IEEE Conference on Computer Vision and Pattern Recognition, 2012, 2168-2175.
[29] H. Ji, C. Liu, Z. Shen and Y. Xu, Robust video denoising using low rank matrix completion, IEEE Conference on Computer Vision and Pattern Recognition, 2010, 1791-1798.
[30] Y. Jin, Q. Wu and L. Liu, Unsupervised upright orientation of man-made models, Graphical Models, 74(2012), 99-108.
[31] T. G. Kolda and B. W. Bader, Tensor decompositions and applications, SIAM Review, 51(2009), 455-500.
[32] C. Lang, G. Liu, J. Yu and S. Yan, Saliency detection by multitask sparsity pursuit, IEEE Transactions on Image Processing, 21(2012), 1327-1338.
[33] R. M. Larsen, PROPACK, http://sun.stanford.edu/~rmunk/PROPACK/, 2004.
[34] D. Lee and H. Seung, Learning the parts of objects by non-negative matrix factorization, Nature, 401(1999), 788.
[35] X. Liang, X. Ren, Z. Zhang and Y. Ma, Repairing sparse low-rank texture, European Conference on Computer Vision, 7576(2012), 482-495.
[36] Z. Lin, R. Liu and H. Li, Linearized alternating direction method with parallel splitting and adaptive penalty for separable convex programs in machine learning, Machine Learning, 99(2015), 287-325.
[37] Z. Lin, R. Liu and Z. Su, Linearized alternating direction method with adaptive penalty for low-rank representation, Advances in Neural Information Processing Systems, 2011, 612-620.
[38] G. Liu, Z. Lin, S. Yan, J. Sun and Y. Ma, Robust recovery of subspace structures by low-rank representation, IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(2013), 171-184.
[39] G. Liu, Z. Lin and Y. Yu, Robust subspace segmentation by low-rank representation, International Conference on Machine Learning, 2010, 663-670.
[40] G. Liu, H. Xu and S. Yan, Exact subspace segmentation and outlier detection by low-rank representation, International Conference on Artificial Intelligence and Statistics, 2012, 703-711.
[41] G. Liu and S. Yan, Latent low-rank representation for subspace segmentation and feature extraction, IEEE International Conference on Computer Vision, 2011, 1615-1622.
[42] J. Liu, P. Musialski, P. Wonka and J. Ye, Tensor completion for estimating missing values in visual data, IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(2013), 208-220.
[43] R. Liu, Z. Lin, Z. Su and J. Gao, Linear time principal component pursuit and its extensions using l1 filtering, Neurocomputing, 142(2014), 529-541.
[44] R. Liu, Z. Lin, F. Torre and Z. Su, Fixed-rank representation for unsupervised visual learning, IEEE Conference on Computer Vision and Pattern Recognition, 2012, 598-605.
[45] C. Lu, J. Feng, Z. Lin and S. Yan, Correlation adaptive subspace segmentation by trace lasso, International Conference on Computer Vision, 2013, 1345-1352.
[46] C. Lu, Z. Lin and S. Yan, Smoothed low rank and sparse matrix recovery by iteratively reweighted least squares minimization, IEEE Transactions on Image Processing, 24(2015), 646-654.
[47] C. Lu, H. Min, Z. Zhao, L. Zhu, D. Huang and S. Yan, Robust and efficient subspace segmentation via least squares regression, European Conference on Computer Vision, 7578(2012), 347-360.
[48] C. Lu, C. Zhu, C. Xu, S. Yan and Z. Lin, Generalized singular value thresholding, AAAI Conference on Artificial Intelligence, 2015, 1805-1811.
[49] X. Lu, Y. Wang and Y. Yuan, Graph-regularized low-rank representation for destriping of hyperspectral images, IEEE Transactions on Geoscience and Remote Sensing, 51(2013), 4009-4018.
[50] Y. Ma, S. Soatto, J. Kosecka and S. Sastry, An Invitation to 3-D Vision: From Images to Geometric Models, 1st edition, Springer, 2004.
[51] K. Min, Z. Zhang, J. Wright and Y. Ma, Decomposing background topics from keywords by principal component pursuit, ACM International Conference on Information and Knowledge Management, 2010, 269-278.
[52] Y. Ming and Q. Ruan, Robust sparse bounding sphere for 3D face recognition, Image and Vision Computing, 30(2012), 524-534.
[53] L. Mukherjee, V. Singh, J. Xu and M. Collins, Analyzing the subspace structure of related images: Concurrent segmentation of image sets, European Conference on Computer Vision, 7575(2012), 128-142.
[54] Y. Nesterov, A method of solving a convex programming problem with convergence rate O(1/k²), (Russian) Dokl. Akad. Nauk SSSR, 269(1983), 543-547.
[55] Y. Panagakis and C. Kotropoulos, Automatic music tagging by low-rank representation, International Conference on Acoustics, Speech, and Signal Processing, 2012, 497-500.
[56] Y. Peng, A. Ganesh, J. Wright, W. Xu and Y. Ma, RASL: Robust alignment by sparse and low-rank decomposition for linearly correlated images, IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(2012), 2233-2246.
[57] J. Qian, J. Yang, F. Zhang and Z. Lin, Robust low-rank regularized regression for face recognition with occlusion, The Workshops of IEEE Conference on Computer Vision and Pattern Recognition, 2014, 21-26.
[58] X. Ren and Z. Lin, Linearized alternating direction method with adaptive penalty and warm starts for fast solving transform invariant low-rank textures, International Journal of Computer Vision, 104(2013), 1-14.
[59] A. P. Singh and G. J. Gordon, A unified view of matrix factorization models, Proceedings of Machine Learning and Knowledge Discovery in Databases, 5212(2008), 358-373.
[60] H. Tan, J. Feng, G. Feng, W. Wang and Y. Zhang, Traffic volume data outlier recovery via tensor model, Mathematical Problems in Engineering, 2013(2013), 164810.
[61] M. Tso, Reduced-rank regression and canonical analysis, Journal of the Royal Statistical Society, Series B (Methodological), 43(1981), 183-189.
[62] R. Vidal, Y. Ma and S. Sastry, Generalized principal component analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(2005), 1945-1959.
[63] R. Vidal, Subspace clustering, IEEE Signal Processing Magazine, 28(2011), 52-68.
[64] J. Wang, V. Saligrama and D. Castanon, Structural similarity and distance in learning, Annual Allerton Conference on Communication, Control and Computing, 2011, 744-751.
[65] Y.-X. Wang and Y.-J. Zhang, Nonnegative matrix factorization: A comprehensive review, IEEE Transactions on Knowledge and Data Engineering, 25(2013), 1336-1353.
[66] S. Wei and Z. Lin, Analysis and improvement of low rank representation for subspace segmentation, arXiv:1107.1561.
[67] Z. Wen, W. Yin and Y. Zhang, Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm, Mathematical Programming Computation, 4(2012), 333-361.
[68] J. Wright, A. Ganesh, S. Rao, Y. Peng and Y. Ma, Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization, Advances in Neural Information Processing Systems, 2009, 2080-2088.
[69] L. Wu, A. Ganesh, B. Shi, Y. Matsushita, Y. Wang and Y. Ma, Robust photometric stereo via low-rank matrix completion and recovery, Asian Conference on Computer Vision, 2010, 703-717.
[70] L. Yang, Y. Lin, Z. Lin and H. Zha, Low rank global geometric consistency for partial-duplicate image search, International Conference on Pattern Recognition, 2014, 3939-3944.
[71] M. Yin, J. Gao and Z. Lin, Laplacian regularized low-rank representation and its applications, IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(2016), 504-517.
[72] Y. Yu and D. Schuurmans, Rank/norm regularization with closed-form solutions: Application to subspace clustering, Uncertainty in Artificial Intelligence, 2011, 778-785.
[73] H. Zhang, Z. Lin and C. Zhang, A counterexample for the validity of using nuclear norm as a convex surrogate of rank, European Conference on Machine Learning, 8189(2013), 226-241.
[74] H. Zhang, Z. Lin, C. Zhang and E. Chang, Exact recoverability of robust PCA via outlier pursuit with tight recovery bounds, AAAI Conference on Artificial Intelligence, 2015, 3143-3149.
[75] H. Zhang, Z. Lin, C. Zhang and J. Gao, Robust latent low rank representation for subspace clustering, Neurocomputing, 145(2014), 369-373.
[76] H. Zhang, Z. Lin, C. Zhang and J. Gao, Relations among some low rank subspace recovery models, Neural Computation, 27(2015), 1915-1950.
[77] T. Zhang, B. Ghanem, S. Liu and N. Ahuja, Low-rank sparse learning for robust visual tracking, European Conference on Computer Vision, 7577(2012), 470-484.
[78] Z. Zhang, A. Ganesh, X. Liang and Y. Ma, TILT: Transform invariant low-rank textures, International Journal of Computer Vision, 99(2012), 1-24.
[79] Z. Zhang, X. Liang and Y. Ma, Unwrapping low-rank textures on generalized cylindrical surfaces, International Conference on Computer Vision, 2011, 1347-1354.
[80] Z. Zhang, Y. Matsushita and Y. Ma, Camera calibration with lens distortion from low-rank textures, IEEE Conference on Computer Vision and Pattern Recognition, 2011, 2321-2328.
[81] Y. Zheng, X. Zhang, S. Yang and L. Jiao, Low-rank representation with local constraint for graph construction, Neurocomputing, 122(2013), 398-405.
[82] X. Zhou, C. Yang, H. Zhao and W. Yu, Low-rank modeling and its applications in image analysis, ACM Computing Surveys, 47(2014), Art. 36.
[83] G. Zhu, S. Yan and Y. Ma, Image tag refinement towards low-rank, content-tag prior and error sparsity, International Conference on Multimedia, 2010, 461-470.
[84] L. Zhuang, H. Gao, Z. Lin, Y. Ma, X. Zhang and N. Yu, Non-negative low rank and sparse graph for semi-supervised learning, IEEE International Conference on Computer Vision and Pattern Recognition, 2012, 2328-2335.
[85] W. Zuo and Z. Lin, A generalized accelerated proximal gradient approach for total-variation-based image restoration, IEEE Transactions on Image Processing, 20(2011), 2748-2759.