An evolutionary multiobjective method for low-rank and sparse matrix decomposition

  • Published: 01 January 2017
  • 90C29, 92D15

  • This paper addresses the problem of finding the low-rank and sparse components of a given matrix. The problem involves two conflicting objectives: simultaneously reducing the rank of one component and the number of nonzero entries of the other. Previous methods combine the two objectives into a single-objective penalty function and solve it with traditional numerical optimization approaches. The main contribution of this paper is a multiobjective method that decomposes the given matrix into a low-rank component and a sparse part, optimizing the two objective functions with the evolutionary multiobjective algorithm MOEA/D. Another contribution is a modified low-rank and sparse matrix decomposition model, which simplifies the decision variable of the objective functions and improves the efficiency of the multiobjective optimization. The proposed method obtains a set of solutions with different trade-offs between the low-rank and sparse objectives, so decision makers can directly choose one or more satisfactory decomposition results according to their requirements. Experiments conducted on artificial datasets and natural images show that the proposed method consistently obtains satisfactory results, and its convergence, stability and robustness are acceptable.

    Citation: Tao Wu, Yu Lei, Jiao Shi, Maoguo Gong. 2017: An evolutionary multiobjective method for low-rank and sparse matrix decomposition, Big Data and Information Analytics, 2(1): 23-37. doi: 10.3934/bdia.2017006




    1. Introduction

    Matrix decomposition can capture characteristic information of a matrix, among which the low-rank and sparse components are two of the most interesting. In the past decades, the theory of matrix decomposition has developed rapidly; in particular, low-rank or sparse matrix decompositions have been used in many popular research areas, such as machine learning, image/signal processing and network analysis [19,22,24,20]. The problem of low-rank and sparse matrix decomposition (LRSMD) has been highlighted and intensively studied recently in [4,6], especially in robust principal component analysis (PCA).

    As described in the literature [4], we are given a matrix $D \in \mathbb{R}^{m \times n}$ with $m \ge n$, and know that it can be decomposed as

    $$ D = L_0 + S_0, \quad (1) $$

    where $L_0 \in \mathbb{R}^{m \times n}$ has low rank and $S_0 \in \mathbb{R}^{m \times n}$ is sparse. We need to recover the low-rank matrix $L_0$ and the sparse matrix $S_0$ without prior knowledge of the rank or of the sparsity pattern, making both the rank of $L_0$ and the cardinality of $S_0$ as small as possible. We can model the LRSMD problem as a two-objective optimization problem:

    $$ \min \ \bigl(\mathrm{rank}(L_0),\ \mathrm{card}(S_0)\bigr) \quad \text{subject to} \quad L_0 + S_0 = D. \quad (2) $$

    It is known that the LRSMD problem is in general ill-posed and NP-hard [4], and the easiest remedy is to approximate the original problem by a convex one. The heuristic of using the $\ell_1$-norm as a proxy for sparsity and the nuclear norm as a surrogate for low rank is widely used in many areas (see e.g. [5,17,10]). The authors of [4] proposed the following model, which converts the NP-hard problem into a convex optimization problem:

    $$ \min \ \|L\|_* + \lambda \|S\|_1 \quad \text{subject to} \quad L + S = D, \quad (3) $$

    where $\|L\|_*$ denotes the nuclear norm of the matrix $L$, that is, the sum of its singular values, and $\|S\|_1$ denotes the $\ell_1$-norm of $S$ seen as a long vector in $\mathbb{R}^{m \times n}$.
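    As a small illustrative sketch (ours, not from the paper), the two convex surrogates used in (3) can be evaluated with standard NumPy routines; the matrix built below is a hypothetical example.

```python
import numpy as np

def nuclear_norm(L):
    # Sum of singular values of L: the convex surrogate for rank(L).
    return np.linalg.svd(L, compute_uv=False).sum()

def l1_norm(S):
    # Entrywise l1-norm of S: the convex surrogate for card(S).
    return np.abs(S).sum()

# Hypothetical test matrix: a rank-1 part plus a few sparse spikes.
rng = np.random.default_rng(0)
L0 = np.outer(rng.standard_normal(20), rng.standard_normal(20))
S0 = np.zeros((20, 20))
S0[rng.integers(0, 20, 10), rng.integers(0, 20, 10)] = 5.0
D = L0 + S0
print(nuclear_norm(L0), l1_norm(S0))
```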

    Nowadays, many methods based on different strategies have been proposed. Iterative thresholding (IT) algorithms [1,2] regularize problem (3), construct a Lagrangian function, and then update the two variables alternately. The iterative form of IT algorithms is simple and convergent, but the rate of convergence is slow and it is difficult to select an appropriate step length. The accelerated proximal gradient (APG) algorithm [12] is another iterative approach, which builds a partial quadratic approximation using the Lipschitz gradient to solve the Lagrangian function iteratively. Although it is similar to IT algorithms, it greatly reduces the number of iterations. The dual method [12] was proposed because the nuclear norm is dual to the spectral norm, so the dual of problem (3) is easier to solve. Compared with the APG algorithm, it has better scalability because it does not require a complete singular value decomposition in each iteration. One of the most influential methods is the augmented Lagrange multipliers (ALM) method [13]. The ALM method first constructs an augmented Lagrangian function and then optimizes the problem by alternating iterations. It is called EALM when exact augmented Lagrange multipliers are used and IALM with inexact multipliers; IALM is also known as the alternating direction method (ADM) [21]. The ALM method is more efficient than the APG method and can achieve higher accuracy with lower storage requirements.

    For the low-rank and sparse matrix decomposition problem, most methods treat the ratio between the low-rank degree and the sparsity as a fixed parameter; however, different given matrices or requirements generally call for different values of the parameter λ. There are related works that use multiobjective evolutionary algorithms to directly optimize two objectives of machine learning problems [15,16]. In this paper, we model low-rank and sparse matrix decomposition as a multiobjective problem (MOP) and propose a modified multiobjective LRSMD (MOLRSMD) model, which has a simple structure and is easy to optimize. In MOLRSMD, on the one hand, the measures of low-rank degree and sparsity are still used as the two objective functions, which ensures that the decomposition results are feasible. On the other hand, the optimization efficiency is improved by modifying the objective functions. We use an evolutionary multiobjective algorithm to optimize the MOLRSMD and obtain trade-off solutions for various situations. The decision maker only needs to select a solution from the Pareto optimal set, and this selection is easy because the quality of the solutions in the Pareto optimal set can be judged directly.

    The remainder of this paper is organized as follows. Section 2 introduces the related background and motivation about the proposed method. In Section 3, the proposed model and algorithm are presented in detail. Experimental study is described in Section 4. Finally, concluding remarks are given in Section 5.


    2. Related background and motivation

    In this section, in order to make it easier to understand the proposed method, we will give a brief introduction to multiobjective optimization. Then we will describe the motivation of multiobjective low-rank and sparse matrix decomposition.


    2.1. Multiobjective optimization

    In general, a problem is called a multiobjective optimization (MOO) problem when more than one objective function has to be optimized simultaneously [8,7,14]. A MOP can be stated as:

    $$ \min \ F(x) = \bigl(f_1(x), \ldots, f_m(x)\bigr)^T \quad \text{subject to} \quad x \in \Omega, \quad (4) $$

    where $\Omega$ is the decision (variable) space, $F: \Omega \to \mathbb{R}^m$ consists of $m$ real-valued objective functions, and $\mathbb{R}^m$ is called the objective space. The objectives usually conflict with each other, which means that no single solution in $\Omega$ minimizes all the objectives simultaneously. The best trade-off among the objectives can be defined in terms of Pareto optimality.

    Let $x_A, x_B \in \Omega$; $x_A$ is said to dominate $x_B$ if and only if

    $$ \forall i = 1, 2, \ldots, m: \ f_i(x_A) \le f_i(x_B) \quad \text{and} \quad \exists j = 1, 2, \ldots, m: \ f_j(x_A) < f_j(x_B), \quad (5) $$

    which we mark as $x_A \prec x_B$. A solution $x^* \in \Omega$ is Pareto optimal if there is no solution $x \in \Omega$ such that $x \prec x^*$. The set of all Pareto optimal solutions is called the Pareto set, and the set of all Pareto optimal objective vectors corresponding to the Pareto optimal solutions is the Pareto front. In order to illustrate the concepts of nondominated solutions, dominated solutions and the Pareto front, we take a two-objective optimization problem as an example in Figure 1.

    Figure 1. The distributions of nondominated solutions, dominated solutions and the Pareto front in the objective space of a two-objective problem.
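    To make definition (5) concrete, a minimal sketch of the dominance test between two objective vectors (assuming both objectives are minimized; the function name is ours) could look as follows:

```python
import numpy as np

def dominates(f_a, f_b):
    """True if f_a dominates f_b: no worse in every objective,
    strictly better in at least one (minimization assumed)."""
    f_a, f_b = np.asarray(f_a), np.asarray(f_b)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

# Hypothetical (low-rank objective, sparsity objective) values of two solutions.
print(dominates([3.0, 10.0], [3.0, 12.0]))  # True
print(dominates([3.0, 10.0], [2.0, 12.0]))  # False: neither dominates the other
```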

    In practical applications, most MOPs have many or even infinitely many Pareto optimal solutions, and we cannot obtain all of them [23]. The purpose of multiobjective optimization is therefore to find a limited number of uniformly distributed solutions that can represent the whole Pareto front. Evolutionary algorithms are among the most popular methods for solving multiobjective optimization problems [11,18,25,9,23], for example VEGA [18], SPEA-Ⅱ [25], NSGA-Ⅱ [9] and MOEA/D [23]. Zhang and Li [23] proposed MOEA/D, which performs well on MOPs. MOEA/D decomposes a MOP into a series of scalar optimization subproblems with different weights. Since the solution of each subproblem is similar to those of its neighboring subproblems, a solution can be improved using information from neighboring subproblems. In this paper, because the problem can be decomposed naturally with different values of the parameter λ, we use MOEA/D to handle multiobjective low-rank and sparse matrix decomposition.


    2.2. Motivation

    For matrix decomposition with constraints on rank and sparsity, as described in problem (2), most methods convert the NP-hard problem into a convex optimization problem. However, these algorithms have to set a value of the parameter λ to trade off between reducing the rank of the low-rank component and making the sparse part as sparse as possible. Although [4] suggests $\lambda = 1/\sqrt{\max(m, n)}$, it is clear that different choices of λ in problem (3) yield different optimal solutions, and it is not easy to know the optimal λ that results in the best decomposition of the given matrix. Figure 2 shows a simple experiment demonstrating how the parameter λ affects the recovery of the sparse component with the ADM method. In Figure 2, the horizontal axis represents different possible choices of the parameter λ, and the vertical axis is the error of recovering the sparse matrix for each λ. The best solution, with the smallest error, is obtained when λ is about 0.22, and a satisfactory sparse component cannot be recovered when λ is too small. This experiment emphasizes how important it is to choose a suitable parameter λ.

    Figure 2. Error of sparse component recovering by the ADM method for LRSMD with different choices of the parameter λ.

    Multiobjective optimization is a popular way of avoiding the difficulty of selecting the parameter λ. Compared with traditional single-objective penalty methods, multiobjective optimization has clear advantages in optimizing multiple objectives simultaneously and in making solution selection easy [3,23]. Especially for solution selection, MOO provides a set of trade-off solutions for the decision maker, while single-objective methods have to try different parameters to find suitable solutions. In most cases, single-objective methods cannot make the best decision without prior knowledge, and sometimes not even with it.

    For LRSMD, we focus on the more original problem, treating LRSMD as a multiobjective problem. The essence of MOO is to optimize several conflicting objectives simultaneously. As described in problem (2), there are two conflicting objectives, representing the low-rank degree and the sparsity, respectively. We can obtain different solutions with an evolutionary multiobjective algorithm such as MOEA/D, and each solution on the Pareto front represents a trade-off between reducing the rank of the low-rank component and reducing the cardinality of the sparse part, which can satisfy different requirements of decision makers.


    3. The proposed method

    First of all, we introduce the model of multiobjective low-rank and sparse matrix decomposition based on the matrix singular value decomposition. Then the multiobjective low-rank and sparse matrix decomposition algorithm is described in detail, which shows how the MOO approach is applied to matrix decomposition.


    3.1. Modified multiobjective low-rank and sparse matrix decomposition

    As mentioned in Section 1, problem (2) is a standard multiobjective problem; however, it is hard to optimize even with an evolutionary algorithm. The reasons can be summarized as follows: 1) both objective functions are NP-hard to optimize, and the low-rank and sparse components are tightly coupled; 2) the decision variable of problem (2) is complex, whether it is the low-rank component $L_0$ or the sparse part $S_0$, so it is difficult to obtain the optimal solution even with evolutionary algorithms, especially for the recovery of the low-rank component. We therefore modify problem (2) into a new multiobjective low-rank and sparse matrix decomposition model based on the matrix singular value decomposition.

    Let the singular value vector $x$ be the decision variable. The objective $f_1$ measures the rank degree of the low-rank component and is defined as

    $$ f_1(x) = \sum_{i=1}^{n} |x_i|. \quad (6) $$

    Similarly, the sparsity of the sparse component is defined as

    $$ f_2(x) = \sum_{i=1}^{m} \sum_{j=1}^{n} |S_{ij}(x)|, \quad (7) $$

    where $x \in \mathbb{R}^n$ and $S(x) = D - U\Sigma(x)V^T$; here $D \in \mathbb{R}^{m \times n}$ is the given matrix and $\Sigma(x)$ is the diagonal matrix formed from $x$. Both $U \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{n \times n}$ are orthogonal matrices, and they are kept fixed in the proposed method.

    At this point, the two objective functions can be combined into a MOP. It has the following form:

    $$ \min_x F(x) = \min_x \bigl(f_1(x), f_2(x)\bigr) \quad \text{subject to} \quad x \in \Omega. \quad (8) $$

    The low-rank component itself is hard to measure and optimize directly, but the singular value vector can represent it compactly. When $U$ and $V$ are suitable and fixed, each low-rank matrix corresponds to exactly one singular value vector by the SVD theorem. Optimizing sparsity is easier than optimizing rank, and at the same time the singular value vector is a much simpler decision variable than the full sparse component. Based on the above, taking the singular value vector as the decision variable of MOLRSMD is advisable. The process of the multiobjective LRSMD model is summarized in Algorithm 1.

    Algorithm 1 Multiobjective LRSMD Model
    Input: Dm×n: matrix data.
    Output: L, S: set of all possible decomposed results.
    1: Step 1) $[U, \Sigma, V] \leftarrow \mathrm{svd}(D)$
    2: Step 2) Generate a set of nondominated solutions $X$ for problem (8) using Algorithm 2.
    3: Step 3) Recover $L$ and $S$.
    4: for $i = 1, 2, \ldots, N$ do
    5:   Step 3.1) $\Sigma_i \leftarrow X(i)$
    6:   Step 3.2) $L_i \leftarrow U \Sigma_i V^T$
    7:   Step 3.3) $S_i \leftarrow D - L_i$
    8: end for
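    As a rough sketch of how the fixed SVD factors enter Steps 1 and 3 and how the objectives (6)-(7) are evaluated (function and variable names are ours, and we use the thin SVD for simplicity), one candidate singular value vector could be processed as follows:

```python
import numpy as np

def molrsmd_objectives(x, D, U, Vt):
    """f1: l1-norm of the singular value vector (low-rank objective, eq. (6));
    f2: entrywise l1-norm of S(x) = D - U diag(x) V^T (sparsity objective, eq. (7))."""
    L = (U * x) @ Vt            # equals U @ np.diag(x) @ Vt
    S = D - L
    return np.abs(x).sum(), np.abs(S).sum(), L, S

# Step 1: fix U and V by one SVD of the given matrix D (hypothetical data here).
rng = np.random.default_rng(1)
D = rng.standard_normal((20, 20))
U, sigma, Vt = np.linalg.svd(D, full_matrices=False)

# One candidate solution: keep only the five largest singular values.
x = sigma.copy()
x[5:] = 0.0
f1, f2, L_i, S_i = molrsmd_objectives(x, D, U, Vt)   # Step 3 recovery for this candidate
```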

    3.2. Procedure of MOLRSMD based on MOEA/D

    In this paper, the framework of MOEA/D [23] is applied to MOLRSMD because MOEA/D decomposes the MOP into several scalar optimization subproblems. In MOLRSMD we use the Tchebycheff approach to decompose the multiobjective optimization problem (8), and the scalar optimization subproblem has the form

    $$ \min \ g^{te}(x \mid w) = \max\bigl\{\, w_1\,|f_1(x) - z_1^*|,\ w_2\,|f_2(x) - z_2^*| \,\bigr\} \quad \text{subject to} \quad x \in \Omega, \quad (9) $$

    where $w = (w_1, w_2)^T$ is a weight vector with $w_1, w_2 > 0$ and $w_1 + w_2 = 1$, and $z^* = (z_1^*, z_2^*)^T$ is the reference point, i.e., $z_1^* = \min\{ f_1(x) \mid x \in \Omega \}$. We can obtain a set of subproblems of the form (9) by selecting different weight vectors $w$. Each subproblem with a different weight vector represents a certain trade-off between the two objectives.
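    A minimal sketch of the Tchebycheff aggregation in (9), assuming the objective values and the reference point are already available (the names are ours):

```python
import numpy as np

def tchebycheff(f, w, z_star):
    """g^te(x | w) = max_i w_i * |f_i(x) - z_i*| for one solution."""
    f, w, z_star = map(np.asarray, (f, w, z_star))
    return float(np.max(w * np.abs(f - z_star)))

# N uniformly spread weight vectors for the two objectives (all strictly positive).
N = 100
w1 = np.linspace(0.01, 0.99, N)
weights = np.column_stack([w1, 1.0 - w1])
value = tchebycheff(f=[12.0, 800.0], w=weights[30], z_star=[5.0, 300.0])
```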

    The procedure of the proposed algorithm for low-rank and sparse matrix decomposition is given in Algorithm 2. The MOLRSMD problem is decomposed into $N$ scalar optimization subproblems via (9). We therefore define a series of uniformly spread weight vectors $w^1, w^2, \ldots, w^N$, compute the Euclidean distances between any two weight vectors, and then determine the $T$ closest weight vectors to each weight vector. The neighborhood of each subproblem consists of all the subproblems whose weight vectors are among the $T$ closest ones. The initial population of $N$ individuals $x^1, x^2, \ldots, x^N \in \Omega$ is generated randomly, where $x^i$ is the singular value vector of the $i$-th candidate low-rank component. During the iterations, problem-specific genetic operators and heuristics are used to generate and improve new individuals. The neighboring solutions are updated according to $g^{te}(x \mid w)$ for each subproblem. Finally, we obtain a set of solutions with different trade-offs between the low-rank degree and the sparsity of the two components.

    Algorithm 2 Procedure of MOLRSMD based on MOEA/D
    Input: $t_{max}$: the maximum number of generations (stopping criterion); $N$: the number of subproblems; $T$: the size of the neighborhood; $p_c$: the probability of crossover; $p_d$: the differential multiplier; $p_m$: the probability of mutation; $p_s$: the probability of mutation strategy selection.
    Output: Pareto front solution; matrix decomposition results.
    1: Step 1) Initialization
    2: Step 1.1) Generate a uniformly distributed set of $N$ weight vectors $w^1, w^2, \ldots, w^N$, compute the Euclidean distances between any two weight vectors, and then determine the $T$ closest weight vectors to each weight vector. For each $i = 1, \ldots, N$, set $B(i) = \{i_1, \ldots, i_T\}$, where $w^{i_1}, \ldots, w^{i_T}$ are the $T$ closest weight vectors to $w^i$.
    3: Step 1.2) Generate an initial population x1,x2,...,xN randomly.
    4: Step 1.3) Initialize two reference points $z^l = (z_1^l, z_2^l)^T$ and $z^u = (z_1^u, z_2^u)^T$, representing the best and the worst values of each objective function in the objective space, respectively.
    5: Step 1.4) For each $i = 1, \ldots, N$, calculate $f_1$, $f_2$ and $g^{te}(x^i \mid w^i, z^l, z^u)$.
    6: Step 2) Set iter=0.
    7: Step 3) Update
    8: for i=1,2,...,N, do
    9:   Step 3.1) Randomly select two indexes $k$, $l$ from $B(i)$, and then generate a new individual solution $y$ from $x^i$, $x^k$ and $x^l$ using the genetic operators with $p_c$, $p_d$, $p_m$ and $p_s$.
    10:  Step 3.2) Apply a thresholding method to improve y.
    11:  Step 3.3) For each $j = 1, 2$: if $z_j^l > f_j(y)$, set $z_j^l = f_j(y)$; and if $z_j^u < f_j(y)$, set $z_j^u = f_j(y)$.
    12:  Step 3.4) For each index $j \in B(i)$, calculate $g^{te}(y \mid w^j, z^l, z^u)$; if $g^{te}(y \mid w^j, z^l, z^u) \le g^{te}(x^j \mid w^j, z^l, z^u)$, then set $x^j = y$ and $g^{te}(x^j \mid w^j, z^l, z^u) = g^{te}(y \mid w^j, z^l, z^u)$.
    13: end for
    14: Step 4) Stopping criteria: If iter<tmax, then iter=iter+1 and go to Step 3, otherwise, stop the algorithm and output.

    In this algorithm, the Tchebycheff decomposition and the genetic operators are designed to make the solutions more uniformly distributed and the algorithm converge faster. Next, these components are described in more detail.


    3.2.1. Objective function normalization

    In this paper, because the two objective function values differ by orders of magnitude, an objective function normalization strategy is employed. In general, when the ranges of the two objectives differ greatly, the Pareto front becomes non-uniform and most of the points cluster in a corner. A simple normalization method has the following form:

    $$ \bar{f}_i = \frac{f_i - z_i^l}{z_i^u - z_i^l}, \quad (10) $$

    where $i = 1, 2$, $z^l = (z_1^l, z_2^l)^T$ is the lowest point and $z^u = (z_1^u, z_2^u)^T$ is the highest point in the objective space. For the scalar optimization subproblems, the objective function of the Tchebycheff decomposition becomes

    $$ \bar{g}^{te}(x \mid w, z^l, z^u) = \max\bigl\{\, w_1\,|\bar{f}_1 - \bar{z}_1|,\ w_2\,|\bar{f}_2 - \bar{z}_2| \,\bigr\} = \max\left\{\, w_1 \left|\frac{f_1 - z_1^l}{z_1^u - z_1^l}\right|,\ w_2 \left|\frac{f_2 - z_2^l}{z_2^u - z_2^l}\right| \,\right\}. \quad (11) $$
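    A short sketch of the min-max normalization (10) that feeds into (11); z_low and z_up play the roles of $z^l$ and $z^u$, and the small eps guard is our addition to avoid division by zero:

```python
import numpy as np

def normalize_objectives(F, z_low, z_up, eps=1e-12):
    """Map each objective into [0, 1] as in (10): (f_i - z_i^l) / (z_i^u - z_i^l)."""
    F, z_low, z_up = map(np.asarray, (F, z_low, z_up))
    return (F - z_low) / np.maximum(z_up - z_low, eps)

# e.g. normalize the whole population's objective values (array of shape N x 2) at once.
```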

    3.2.2. Genetic operator

    The genetic operators are the key to an evolutionary algorithm; when they are combined with the characteristics of the optimization problem, the convergence speed of the algorithm can be improved. In this paper, we use a differential strategy and a dual-mode mutation method to speed up the algorithm.

    In the crossover operator, a differential optimization strategy is applied. We randomly select two individuals from the neighborhood of $x^i$, denoted $x^k$ and $x^l$, and a new individual is generated by

    $$ y_j = \begin{cases} x_j^i + F\,(x_j^k - x_j^l), & \text{if } r < p_c \\ x_j^i, & \text{otherwise} \end{cases} \quad (12) $$

    where $x_j$ and $y_j$ denote the $j$-th components of the corresponding vectors, $j = 1, 2, \ldots, n$, $r$ is a uniform random number in $[0, 1]$, and $F$ is the differential multiplier ($p_d$ in Table 1).

    We apply a scale-controllable mutation with two different mutation operators to balance the diversity and the precision of the population. In this paper, we select Gaussian mutation and polynomial mutation. Gaussian mutation is simpler than polynomial mutation and performs well in global search, while polynomial mutation, as a local search, can improve the quality of solutions. Therefore, if a random number $r < p_m$, we do

    $$ y = \begin{cases} \mathrm{Gauss}(y, \sigma), & \text{if } rand < p_s \\ \mathrm{Polynomial}(y), & \text{otherwise} \end{cases} \quad (13) $$

    where $\mathrm{Gauss}(y, \sigma)$ denotes Gaussian mutation of $y$ with standard deviation $\sigma$, and $\mathrm{Polynomial}(y)$ denotes polynomial mutation of the variable $y$.
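    A sketch of the differential crossover (12) and the two-mode mutation (13); the parameter names follow Table 1, and the polynomial mutation below is a simplified stand-in for the usual bounded polynomial operator:

```python
import numpy as np

rng = np.random.default_rng()

def differential_crossover(x_i, x_k, x_l, p_c=0.8, p_d=0.5):
    # Eq. (12): component-wise differential update applied with probability p_c.
    mask = rng.random(x_i.shape) < p_c
    return np.where(mask, x_i + p_d * (x_k - x_l), x_i)

def dual_mutation(y, p_m=0.2, p_s=0.85, sigma=0.1, eta=20.0):
    # Eq. (13): Gaussian mutation (global search) with probability p_s,
    # otherwise a simplified polynomial mutation (local search); both gated by p_m.
    if rng.random() >= p_m:
        return y
    if rng.random() < p_s:
        return y + rng.normal(0.0, sigma, size=y.shape)
    u = rng.random(y.shape)
    delta = np.where(u < 0.5, (2 * u) ** (1 / (eta + 1)) - 1,
                     1 - (2 * (1 - u)) ** (1 / (eta + 1)))
    return y + delta
```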

    In summary, taking the singular value vector as the decision variable makes the problem simpler and easier to optimize. On the one hand, the dimension of the decision variable is $O(n)$, whereas it is $O(mn)$ in traditional methods. On the other hand, the multiobjective LRSMD model requires only a few SVD computations, while most traditional penalty methods need an SVD in every iteration, and most of the time cost per iteration is spent on the SVD. Furthermore, the multiobjective LRSMD model yields a set of trade-off solutions from which decision makers can choose directly according to their requirements.


    4. Experimental study

    In this section, we present experiments on artificially generated datasets and some natural images to evaluate the performance of the proposed method. The experiments focus on how the multiobjective low-rank and sparse matrix decomposition algorithm works on different types of test data, and analyze its performance in terms of convergence, stability and robustness.


    4.1. Experimental datasets and experimental setting

    Let $D = L + S$ be the available data, where $L$ and $S$ are the original low-rank and sparse matrices, respectively, that need to be recovered. The experimental datasets are generated with different combinations of the rank of $L$ and the sparsity of $S$. In addition, the size of the original matrix also varies in the experiments, and all matrices are generated with Gaussian random numbers. We also run experiments on datasets with noise to evaluate the robustness of the proposed method. Besides the artificially generated datasets described above, the experimental data also include some natural images, consisting of Lena and several images from the ORL faces dataset.

    In order to illustrate the experimental results intuitively, we define some indexes to measure the results and the performance of the proposed method. Let $\hat{L}$ and $\hat{S}$ denote the recovered low-rank and sparse components. The relative error with respect to the original low-rank matrix $L$ is defined by

    $$ \mathrm{ErrLR} := \frac{\|\hat{L} - L\|_F}{\|L\|_F}. \quad (14) $$

    Similarly, the relative error with respect to the original sparse matrix $S$ is defined by

    $$ \mathrm{ErrSP} := \frac{\|\hat{S} - S\|_F}{\|S\|_F}. \quad (15) $$

    The total error with respect to the original matrix is

    $$ \mathrm{ErrT} := \frac{\|\hat{L} + \hat{S} - D\|_F}{\|D\|_F}. \quad (16) $$
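    The error measures (14)-(16) are plain relative Frobenius-norm errors; a direct sketch (with our naming) is:

```python
import numpy as np

def recovery_errors(L_hat, S_hat, L_true, S_true, D):
    """ErrLR, ErrSP and ErrT as defined in (14)-(16)."""
    err_lr = np.linalg.norm(L_hat - L_true, 'fro') / np.linalg.norm(L_true, 'fro')
    err_sp = np.linalg.norm(S_hat - S_true, 'fro') / np.linalg.norm(S_true, 'fro')
    err_t = np.linalg.norm(L_hat + S_hat - D, 'fro') / np.linalg.norm(D, 'fro')
    return err_lr, err_sp, err_t
```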

    Because of the nature of the multiobjective matrix decomposition method, there are many different decomposition results, and some of them differ greatly from the original matrices. On the one hand, to analyze the accuracy of the matrix decomposition on the artificial datasets, we focus on the solution with the smallest recovery error and on the distribution of the Pareto front. On the other hand, for the natural images all decomposed solutions can be displayed as images, so we can select a solution according to the actual demand.

    The parameters in the proposed method are given in Table 1.

    Table 1. The Parameters in the MOLRSMD used in the Experiments.
    | Parameter | Meaning | Value |
    | --- | --- | --- |
    | $N$ | The number of subproblems | 100 |
    | $T$ | The number of neighbors | 20 |
    | $t_{max}$ | The maximum number of generations | 300 |
    | $p_c$ | The probability of crossover | 0.8 |
    | $p_d$ | The differential multiplier | 0.5 |
    | $p_m$ | The probability of mutation | 0.2 |
    | $p_s$ | The probability of mutation strategy selection | 0.85 |

    4.2. Experimental results


    4.2.1. Artificial datasets

    For the artificially generated datasets, we select test matrices of three different sizes: 20×20, 50×50 and 100×100. In addition, the rank and sparsity also vary. Figure 3 illustrates the results for the three sizes and shows that MOLRSMD works well on these datasets. The first column shows the Pareto fronts of MOLRSMD on the three datasets, whose main difference is the matrix size. The Pareto front is approximately uniformly distributed for all three datasets, which indicates that MOLRSMD has good convergence. Each figure in the second column of Figure 3 shows the two objective function values corresponding to the solutions: the blue points represent the low-rank objective values and the red points the sparse objective values. The low-rank objective always varies within a small range compared with the sparse objective. The last column shows three box-plots of ErrLR and ErrSP over 30 independent runs. The box-plots show the statistical behavior of ErrLR and ErrSP: the low-rank recovery error is lower than the sparse recovery error and does not change appreciably across runs, which indicates that the proposed method is very stable.

    Figure 3. The Pareto front, the two objective function values corresponding to the solutions, and box-plots of ErrLR and ErrSP over 30 independent runs. The three rows correspond to data with size 20×20, rank 5, sparsity 0.2; size 50×50, rank 5, sparsity 0.2; and size 100×100, rank 5, sparsity 0.5, respectively.

    Furthermore, we also run experiments on noisy data to test the robustness of the proposed method. To data with size 20×20, rank 1 and sparsity 0.1 we add Gaussian noise at random locations. The noise types are determined by two factors: the number of noise points, chosen from 20 and 50, and the σ of the Gaussian noise, chosen from 1, 5 and 15. We combine the two factors and test all combinations except 50 points with σ = 1.

    Figure 4 displays the box-plots of the recovery errors in the five noisy situations described above. Both the low-rank and the sparse recovery errors change only within a small range for the different noise types. As the number of noise points or the σ of the Gaussian noise increases, the range of the errors increases as well, but the differences between the noise settings remain reasonable. In a word, the robustness of the proposed method is acceptable. Figure 5 shows that the Pareto front is still smooth and uniformly distributed under the worst noise, with 50 noise points and σ = 15.

    Figure 4. Box-plots of ErrLR and ErrSP for data with different noise. (a): 20 noise points, σ=1. (b): 20 noise points, σ=5. (c): 20 noise points, σ=15. (d): 50 noise points, σ=5. (e): 50 noise points, σ=15.
    Figure 5. The Pareto front and the two objective function values corresponding to the solutions for noisy data. Noise type: 50 noise points and σ=15.

    4.2.2. Natural image datasets

    The proposed method also works well on natural images. The images used consist of Lena and the ORL faces dataset. Since there is no ground-truth decomposition for these datasets, we only analyze the Pareto front and the practical results to evaluate the performance of the proposed method. Some results are displayed in Figure 6 and Figure 7: Figure 6 is for the Lena image and Figure 7 for one randomly chosen face image from the ORL faces dataset. The Pareto front of the proposed method is smooth and evenly distributed, which means the method converges to the Pareto front stably. Part (b) of Figures 6 and 7 shows three decomposed results located at different positions on the Pareto front. In the first row of part (b) the sparse component dominates; on the contrary, in the third row the low-rank part dominates. The middle row is a balance between the low-rank and sparse components.

    Figure 6. The Pareto front and decomposed results for the image Lena. (a) is the Pareto front of MOLRSMD and (b) shows three different decomposed results selected from the Pareto front, located at the top, middle and bottom of the Pareto front, respectively.
    Figure 7. The Pareto front and decomposed results for the face image. (a) is the Pareto front of MOLRSMD and (b) shows three different decomposed results selected from the Pareto front, located at the top, middle and bottom of the Pareto front, respectively.

    4.2.3. Comparison Study

    The performance of the proposed MOLRSMD has been examined in the previous subsections. In this subsection, a simple comparison with some advanced LRSMD algorithms is carried out on artificial datasets. ADM [21] and ALM [13] are two mainstream and efficient algorithms in existing research. Both of them need the parameter λ, which balances the low-rank and sparse components, to be set as prior information. For a fair comparison with MOLRSMD, the parameter λ of these algorithms is varied from 0 to 1 with a step size of 0.01. The results of the proposed MOLRSMD compared with ADM and ALM are presented in Figure 8 and Table 2.

    Figure 8. Results of the proposed MOLRSMD compared with ADM and ALM. The test dataset size is 50×50, the rank is 5 and the sparsity is 0.2.
    Table 2. Errors of the proposed MOLRSMD compared with ADM and ALM.
    | Methods | ErrLR | ErrSP | ErrT |
    | --- | --- | --- | --- |
    | MOLRSMD | 0.0100 | 0.1986 | 0 |
    | ADM | 0.0299 | 0.1331 | 1.9135e-17 |
    | ALM | 0.0148 | 0.0661 | 2.2412e-10 |

    Each row of Figure 8 presents the Pareto fronts obtained by the different algorithms. The Pareto front of MOLRSMD is complete and smooth compared with those of ADM and ALM. The results of ADM and ALM are unevenly distributed: most of their Pareto optimal solutions cluster in the low-sparsity region, even in the region where the sparsity is close to zero. In particular, in the second row of Figure 8, the solutions of ADM and ALM exhibit a sudden change and cluster in two regions (with objective values of about (7500, 2500) and (4500, 0), respectively), while the solutions of the proposed MOLRSMD are uniformly and smoothly distributed. Table 2 reports the smallest recovery errors among the 100 solutions of the proposed MOLRSMD, ADM and ALM on an artificial dataset (size 50×50, rank 5, sparsity 0.2). The errors of the three algorithms are similar. From the above we observe that the influence of the parameter λ is nonlinear and hard to describe, and it is sometimes hard to find a suitable λ. For example, if the desired solution lies in the region with low rank and high sparsity, finding a suitable λ is very difficult or even impossible. In contrast, the proposed MOLRSMD avoids the selection of λ, so it is easier to find the optimal solution than with ADM and ALM.


    5. Conclusion

    In this paper, an evolutionary multiobjective low-rank and sparse matrix decomposition method has been proposed for finding optimal trade-off solutions between the rank of the low-rank component and the sparsity of the sparse part. We focus on the more original formulation of low-rank and sparse matrix decomposition and, using the matrix singular value decomposition, modify the model by taking the singular value vector as the decision variable. Experiments on different types of datasets, including artificial datasets, datasets with noise and natural images, indicate that the proposed MOLRSMD always exhibits good convergence, reliable stability and acceptable robustness. They also show that it can be applied to practical problems.

    Future work will focus on improving the recovery accuracy. In the proposed method, the orthogonal matrices $U$ and $V$ are fixed, but different matrices generally have different orthogonal factors, and fixing them tends to increase the error. Next, we will explore how to find the best $U$ and $V$ while the singular value vector changes.


    [1] Beck A., Teboulle M. (2009) A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2: 183-202. doi: 10.1137/080716542
    [2] Cai J.-F., Candès E. J., Shen Z. (2010) A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization 20: 1956-1982. doi: 10.1137/080738970
    [3] Cai Z., Wang Y. (2006) A multiobjective optimization-based evolutionary algorithm for constrained optimization. IEEE Transactions on Evolutionary Computation 10: 658-675. doi: 10.1109/TEVC.2006.872344
    [4] Candès E. J., Li X., Ma Y., Wright J. (2011) Robust principal component analysis? Journal of the ACM (JACM) 58: Art. 11, 37 pp. doi: 10.1145/1970392.1970395

    [5] Candès E. J., Recht B. (2009) Exact matrix completion via convex optimization. Foundations of Computational Mathematics 9: 717-772. doi: 10.1007/s10208-009-9045-5
    [6] Chandrasekaran V., Sanghavi S., Parrilo P. A., Willsky A. S. (2011) Rank-sparsity incoherence for matrix decomposition. SIAM Journal on Optimization 21: 572-596. doi: 10.1137/090761793
    [7] Coello C. A. C., Van Veldhuizen D. A., Lamont G. B. (2002) Evolutionary Algorithms for Solving Multi-Objective Problems, Genetic Algorithms and Evolutionary Computation, 5. Kluwer Academic/Plenum Publishers, New York. doi: 10.1007/978-1-4757-5184-0

    [8] Deb K. (2001) Multi-objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, Ltd., Chichester.

    [9] Deb K., Pratap A., Agarwal S., Meyarivan T. (2002) A fast and elitist multiobjective genetic algorithm: NSGA-Ⅱ. IEEE Transactions on Evolutionary Computation 6: 182-197. doi: 10.1109/4235.996017
    [10] Fazel M., Hindi H., Boyd S. P. (2001) A rank minimization heuristic with application to minimum order system approximation. in Proceedings of the American Control Conference, IEEE 6: 4734-4739. doi: 10.1109/ACC.2001.945730
    [11] Gong M., Jiao L., Du H., Bo L. (2008) Multiobjective immune algorithm with nondominated neighbor-based selection. Evolutionary Computation 16: 225-255. doi: 10.1162/evco.2008.16.2.225
    [12] Z. Lin, A. Ganesh, J. Wright, L. Wu, M. Chen and Y. Ma, Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix, Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), vol. 61,2009.
    [13] Z. Lin, M. Chen and Y. Ma, The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices, arXiv preprint, arXiv: 1009.5055, 2010.
    [14] Miettinen K. (1999) Nonlinear Multiobjective Optimization, Kluwer Academic Publishers, Boston, MA.

    [15] Qian C., Yu Y., Zhou Z.-H. (2015) Pareto ensemble pruning. in AAAI 2935-2941.
    [16] Qian C., Yu Y., Zhou Z.-H. (2015) Subset selection by Pareto optimization. in Advances in Neural Information Processing Systems 1774-1782.
    [17] Recht B., Fazel M., Parrilo P. A. (2010) Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review 52: 471-501. doi: 10.1137/070697835
    [18] J. D. Schaffer, Multiple objective optimization with vector evaluated genetic algorithms, in Proceedings of the 1st international Conference on Genetic Algorithms. L. Erlbaum Associates Inc. , (1985), 93-100.
    [19] Starck J. L., Elad M., Donoho D. L. (2005) Image decomposition via the combination of sparse representations and a variational approach. IEEE Transactions on Image Processing 14: 1570-1582. doi: 10.1109/TIP.2005.852206
    [20] Yan J., Liu J., Li Y., Niu Z., Liu Y. (2010) Visual saliency detection via rank-sparsity decomposition. in IEEE International Conference on Image Processing, IEEE 1089-1092. doi: 10.1109/ICIP.2010.5652280
    [21] Yuan X., Yang J. (2013) Sparse and low-rank matrix decomposition via alternating direction methods. Pacific Journal of Optimization 9: 167-180.
    [22] Zhang C., Liu J., Tian Q., Xu C., Lu H., Ma S. (2011) Image classification by non-negative sparse coding, low-rank and sparse decomposition. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 1673-1680. doi: 10.1109/CVPR.2011.5995484
    [23] Zhang Q., Li H. (2007) MOEA/D: A multiobjective evolutionary algorithm based on decomposition. IEEE Transactions on Evolutionary Computation 11: 712-731.
    [24] Zibulevsky M., Pearlmutter B. A. (2001) Blind source separation by sparse decomposition in a signal dictionary. Neural Computation 13: 863-882.
    [25] Zitzler E., Laumanns M., Thiele L., et al. (2001) SPEA2: Improving the strength pareto evolutionary algorithm. in Eurogen 3242: 95-100.
  • © 2017 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
