
Cross-view data correlation analysis is a typical learning paradigm in machine learning and pattern recognition. To associate data from different views, many approaches to correlation learning have been proposed, among which canonical correlation analysis (CCA) is a representative. When data are associated with label information, CCA can be extended to a supervised version by embedding the supervision information. Although most variants of CCA have achieved good performance, nearly all of their objective functions are nonconvex, implying that their optimal solutions are difficult to obtain. More seriously, the discriminative scatters and manifold structures are not exploited simultaneously. To overcome these shortcomings, in this paper we construct Discriminative Correlation Learning with Manifold Preservation (DCLMP for short), in which, in addition to the within-view supervision information, discriminative knowledge as well as spatial structural information is exploited to benefit subsequent decision making. To pursue a closed-form solution, we remodel the objective of DCLMP from the Euclidean space to a geodesic space and obtain a convex formulation of DCLMP (C-DCLMP). Finally, we comprehensively evaluate the proposed methods and demonstrate their superiority on both toy and real datasets.
Citation: Qing Tian, Heng Zhang, Shiyu Xia, Heng Xu, Chuang Ma. Cross-view learning with scatters and manifold exploitation in geodesic space[J]. Electronic Research Archive, 2023, 31(9): 5425-5441. doi: 10.3934/era.2023275
Correlation analysis deals with data that have cross-view feature representations. To handle such tasks, many correlation learning approaches have been proposed, among which canonical correlation analysis (CCA) [1,2,3,4,5] is a representative method that has been widely employed [6,7,8,9,10,11]. To be specific, given training data with two or more feature-view representations, traditional CCA seeks a projection vector for each view such that the cross-view correlations are maximized. After the data are mapped along the projection directions, subsequent cross-view decisions can be made [4]. Although CCA yields good results, room for improvement remains because the data labels are not incorporated into learning.
When class label information is also available, CCA can be remodeled into a discriminant form by making use of the labels. To this end, Sun et al. [12] proposed a discriminative variant of CCA (i.e., DCCA) that enlarges the distances between dissimilar samples while reducing those between similar samples. Subsequently, Peng et al. [13] built a locally discriminative version of CCA (i.e., LDCCA) based on the assumption that the data distributions follow low-dimensional manifold embeddings. Besides, Su et al. [14] established multi-patch embedding CCA (MPECCA), which develops multiple metrics rather than a single one to model within-class scatters. Afterwards, Sun et al. [15] built a generalized framework for CCA (GCCA), and Ji et al. [16] remodeled the scatter matrices by deconstructing them into several fractional-order components, achieving performance improvements.
In addition to directly constructing label-exploited versions of CCA, the supervised labels can be utilized by embedding them as regularization terms. Along this direction, Zhou et al. [17] presented CECCA by embedding LDA-guided [18] feature combinations into the objective function of CCA. Furthermore, Zhao et al. [19] constructed HSL-CCA by reducing inter-class scatters within local neighborhoods. Later, Haghighat et al. [20] proposed the DCA model by deconstructing the inter-class scatter matrix under the guidance of class labels. These variants of CCA were designed for two-view data and cannot directly handle multi-view scenarios. To overcome this shortcoming, many multi-view extensions have been proposed, such as GCA [21], MULDA [22] and FMDA [23].
Although the aforementioned methods have achieved success to varying extents, unfortunately, the objective functions of nearly all of them are not convex [14,24,25]. CDCA [26] yields closed-form solutions and better results than the previous methods, but it still does not exploit the discriminative scatters and manifold structures simultaneously.
To overcome these shortcomings, we first design a discriminative correlation learning with manifold preservation, coined DCLMP, in which not only the cross-view discriminative information but also the spatial structural information of the training data is taken into account to enhance subsequent decision making. To pursue closed-form solutions, we remodel the objective of DCLMP from the Euclidean space to a geodesic space. In this way, we obtain a convex formulation of DCLMP (C-DCLMP). Finally, we comprehensively evaluated the proposed methods and demonstrated their superiority on both toy and real datasets. To summarize, our contributions are three-fold:
1. DCLMP is constructed by modelling both the cross-view discriminative information and the spatial structural information of the training data.
2. The objective function of DCLMP is remodelled to obtain its convex formulation (C-DCLMP).
3. The proposed methods are evaluated with extensive experimental comparisons.
This paper is organized as follows. Section 2 reviews related theories of CCA. Section 3 presents models and their solving algorithms. Then, experiments and comparisons are reported to evaluate the methods in Section 4. Section 5 concludes and provides future directions.
In this section, we briefly review work on multi-view learning, which studies how to establish constraints or dependencies between views by modeling and discovering their interrelations. Tang et al. [27] proposed a multi-view feature selection method named CvLP-DCL, which divides the label space into a consensus part and a domain-specific part and explores the latent information between different views in the label space. Additionally, CvLP-DCL explores how to combine cross-domain similarity graph learning with matrix-induced regularization to boost model performance. Tang et al. [28] also proposed UoMvSc for multi-view learning, which mines the value of view-specific graphs and embedding matrices by combining spectral clustering with k-means clustering. In addition, Wang et al. [29] proposed an effective framework for multi-view learning named E2OMVC, which constructs a latent feature representation based on anchor graphs and the clustering indicator matrix of multi-view data to obtain better clustering results.
We briefly review related theories of CCA [1,2]. Given two-view feature representations of training data, CCA seeks two projection matrices, one for each view, while preserving the cross-view correlations. To be specific, let $X=[x_1,\dots,x_N]\in\mathbb{R}^{p\times N}$ and $Y=[y_1,\dots,y_N]\in\mathbb{R}^{q\times N}$ be two view representations of $N$ training samples, with $x_i$ and $y_i$ denoting normalized representations of the $i$th sample. Besides, let $W_x\in\mathbb{R}^{p\times r}$ and $W_y\in\mathbb{R}^{q\times r}$ denote the projection matrices mapping the training data from the individual view spaces into an $r$-dimensional common space. Then, the correlation between $W_x^T x_i$ and $W_y^T y_i$ should be maximized. Consequently, the formal objective of CCA can be formulated as
$$ \max_{\{W_x, W_y\}} \frac{W_x^T C_{xy} W_y}{\sqrt{W_x^T C_{xx} W_x \cdot W_y^T C_{yy} W_y}}, \tag{2.1} $$
where $C_{xx}=\frac{1}{N}\sum_{i=1}^{N}(x_i-\bar{x})(x_i-\bar{x})^T$, $C_{yy}=\frac{1}{N}\sum_{i=1}^{N}(y_i-\bar{y})(y_i-\bar{y})^T$ and $C_{xy}=\frac{1}{N}\sum_{i=1}^{N}(x_i-\bar{x})(y_i-\bar{y})^T$, with $\bar{x}=\frac{1}{N}\sum_{i=1}^{N}x_i$ and $\bar{y}=\frac{1}{N}\sum_{i=1}^{N}y_i$ denoting the sample means of the two views. The numerator describes the sample correlation in the projected space, while the denominator limits the scatter of each view. Typically, Eq (2.1) is converted into a generalized eigenvalue problem:
$$ \begin{pmatrix} 0 & XY^T \\ YX^T & 0 \end{pmatrix} \begin{pmatrix} W_x \\ W_y \end{pmatrix} = \lambda \begin{pmatrix} XX^T & 0 \\ 0 & YY^T \end{pmatrix} \begin{pmatrix} W_x \\ W_y \end{pmatrix}. \tag{2.2} $$
Then, $\begin{pmatrix} W_x \\ W_y \end{pmatrix}$ can be obtained by computing the $r$ largest eigenvectors of
$$ \begin{pmatrix} XX^T & 0 \\ 0 & YY^T \end{pmatrix}^{-1} \begin{pmatrix} 0 & XY^T \\ YX^T & 0 \end{pmatrix}. $$
After $W_x$ and $W_y$ are obtained, $x_i$ and $y_i$ can be concatenated as $W_x^T x_i + W_y^T y_i = \begin{pmatrix} W_x \\ W_y \end{pmatrix}^T \begin{pmatrix} x_i \\ y_i \end{pmatrix}$. Once the concatenated feature representations are obtained, subsequent classification or regression decisions can be made.
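For concreteness, Eqs (2.1) and (2.2) can be solved in a few lines of NumPy/SciPy. The sketch below is our illustration rather than the authors' code; the function name `cca` and the small ridge term `reg` (added for numerical stability) are our own choices.

```python
import numpy as np
from scipy.linalg import eigh

def cca(X, Y, r, reg=1e-6):
    """Classical CCA via the generalized eigenproblem of Eq (2.2).

    X: (p, N) and Y: (q, N) view matrices with samples as columns.
    Returns projection matrices Wx (p, r) and Wy (q, r).
    """
    p, N = X.shape
    q = Y.shape[0]
    Xc = X - X.mean(axis=1, keepdims=True)   # center each view
    Yc = Y - Y.mean(axis=1, keepdims=True)
    Cxx = Xc @ Xc.T / N + reg * np.eye(p)    # within-view scatters (ridge-regularized)
    Cyy = Yc @ Yc.T / N + reg * np.eye(q)
    Cxy = Xc @ Yc.T / N                      # cross-view scatter
    # Block matrices of Eq (2.2): A w = lambda * B w
    A = np.block([[np.zeros((p, p)), Cxy],
                  [Cxy.T, np.zeros((q, q))]])
    B = np.block([[Cxx, np.zeros((p, q))],
                  [np.zeros((q, p)), Cyy]])
    evals, evecs = eigh(A, B)                # symmetric-definite generalized problem
    W = evecs[:, np.argsort(evals)[::-1][:r]]  # r largest eigenvalues
    return W[:p], W[p:]
```

The fused representation of a pair $(x_i, y_i)$ is then $W_x^T x_i + W_y^T y_i$, which feeds the downstream classifier.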
The most classic discriminative variant of CCA is DCCA [12], formulated as follows:
$$ \max_{w_x, w_y} \left( w_x^T C_w w_y - \eta \, w_x^T C_b w_y \right) \quad \text{s.t.} \quad w_x^T X X^T w_x = 1, \; w_y^T Y Y^T w_y = 1. \tag{2.3} $$
DCCA is discriminative because it uses instance labels to compute the within-class and between-class relationships. Similar to DCCA, Peng et al. [13] proposed LDCCA, formulated as follows:
$$ \max_{w_x, w_y} \frac{w_x^T \tilde{C}_{xy} w_y}{\sqrt{(w_x^T C_{xx} w_x)(w_y^T C_{yy} w_y)}} \quad \text{s.t.} \quad w_x^T X X^T w_x = 1, \; w_y^T Y Y^T w_y = 1, \tag{2.4} $$
where $\tilde{C}_{xy} = C_w - \eta\, C_b$. Compared with DCCA, LDCCA considers the local correlations of the within-class and between-class sets. However, these methods do not consider multimodal recognition or feature-level fusion. Haghighat et al. [20] proposed DCA, which incorporates the class structure, i.e., the memberships of the samples in classes, into the correlation analysis. Additionally, Su et al. [14] proposed MPECCA for multi-view feature learning, formulated as follows:
$$ \begin{aligned} \max_{u, v, w_j^{(x)}, w_r^{(y)}} \;& u^T \Big( \sum_{i=1}^{N} \sum_{j=1}^{M} \sum_{r=1}^{M} \big( w_j^{(x)} w_r^{(y)} \big) X S_{ij}^{(x)} L_i S_{ir}^{(y)T} Y^T \Big) v \\ \text{s.t.} \;& u^T S_{wx} u = 1, \; v^T S_{wy} v = 1, \\ & \sum_{j=1}^{M} w_j^{(x)} = 1, \; w_j^{(x)} \geqslant 0, \quad \sum_{r=1}^{M} w_r^{(y)} = 1, \; w_r^{(y)} \geqslant 0, \end{aligned} \tag{2.5} $$
where $u$ and $v$ denote the correlation projection vectors. Combining LDA with CCA, CECCA was proposed [17], with the following optimization objective:
$$ \max_{w_x, w_y} \; w_x^T X (I + 2A) Y^T w_y + w_x^T X A X^T w_x + w_y^T Y A Y^T w_y \quad \text{s.t.} \quad w_x^T X X^T w_x + w_y^T Y Y^T w_y = 2, \tag{2.6} $$
where $A = 2U - I$ and $I$ denotes the identity matrix. On the basis of CCA, CECCA incorporates discriminant analysis to jointly optimize the correlation and discriminability of the combined features, which makes the extracted features more suitable for classification. However, these methods cannot achieve closed-form solutions. CDCA [26] combines GMML and discriminative CCA and achieves a closed-form solution in a Riemannian manifold space, with the following optimization objective:
$$ \min_{A \succ 0} \; \gamma \, \mathrm{tr}(AC) + (1-\gamma) \big( \mathrm{tr}(A S_Z) + \mathrm{tr}(A^{-1} D_Z) \big) = \mathrm{tr}\big( A (\gamma C + (1-\gamma) S_Z) \big) + \mathrm{tr}\big( A^{-1} (1-\gamma) D_Z \big). \tag{2.7} $$
Equation (2.7) and CDCA [26] show that, with the help of a discriminative term and a closed-form solution, multi-view learning can readily attain the global optimum and achieve good results.
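The closed form behind Eq (2.7) is the geodesic (geometric) mean of SPD matrices: the minimizer of $\mathrm{tr}(AS) + \mathrm{tr}(A^{-1}D)$ over $A \succ 0$ is the geometric mean of $S^{-1}$ and $D$ [34]. A minimal sketch, with illustrative function names of our own (`spd_geodesic`, `cdca_metric`):

```python
import numpy as np
from scipy.linalg import inv, sqrtm

def spd_geodesic(A, B, t=0.5):
    """Point A #_t B on the geodesic joining SPD matrices A and B:
    A^(1/2) (A^(-1/2) B A^(-1/2))^t A^(1/2); t = 1/2 gives the geometric mean."""
    Ah = np.real(sqrtm(A))
    Aih = inv(Ah)
    M = Aih @ B @ Aih
    w, V = np.linalg.eigh((M + M.T) / 2)    # symmetrize against round-off
    Mt = V @ np.diag(np.maximum(w, 1e-12) ** t) @ V.T
    return Ah @ Mt @ Ah

def cdca_metric(C, S_Z, D_Z, gamma):
    """Closed-form minimizer of Eq (2.7): the geometric mean of
    (gamma*C + (1-gamma)*S_Z)^(-1) and (1-gamma)*D_Z."""
    S_eff = gamma * C + (1 - gamma) * S_Z
    return spd_geodesic(inv(S_eff), (1 - gamma) * D_Z, t=0.5)
```

The same $t$-parameterized geodesic reappears in Section 3, where the midpoint $t = 1/2$ is generalized to an arbitrary point on the geodesic.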
CCA and its variants suffer from three main problems: (1) the similarity and dissimilarity across views are not modeled; (2) although the data labels can be exploited by imposing supervised constraints, the resulting objective functions are nonconvex; (3) the cross-view correlations are modeled in Euclidean space through RKHS kernel transformations [30,31], whose discriminating ability is limited.
We present a novel cross-view learning model, called DCLMP, in which not only the within-class and between-class scatters are characterized, but also the similarity and dissimilarity of the training data across views are modelled. Although many preferable characteristics are incorporated in DCLMP, its objective function still suffers from nonconvexity. To facilitate pursuing globally optimal solutions, we further remodel DCLMP into a Riemannian manifold space to make the objective function convex. The resulting method is named C-DCLMP.
Assume we are given $N$ training instances sampled from $K$ classes with two views of feature representations, i.e., $X=[X_1, X_2, \dots, X_K]\in\mathbb{R}^{p\times N}$ with $X_k=[x_1^k, x_2^k, \dots, x_{N_k}^k]$ being the $N_k$ x-view instances from the $k$-th class, and $Y=[Y_1, Y_2, \dots, Y_K]\in\mathbb{R}^{q\times N}$ with $Y_k=[y_1^k, y_2^k, \dots, y_{N_k}^k]$ being the $N_k$ y-view instances from the $k$-th class, where $x_i^k$ and $y_i^k$ stand for the two view representations of the same instance. In order to concatenate them for subsequent classification, we denote by $U\in\mathbb{R}^{p\times r}$ and $V\in\mathbb{R}^{q\times r}$ the projection matrices that transform the two view representations into an $r$-dimensional common space.
To perform cross-view learning while exploiting supervision knowledge, in terms of similar and dissimilar relationships among instances within each view and across views as well as the sample distribution manifolds, we construct DCLMP. To this end, the model should take the following aspects into account: 1) distances between similar instances from the same class should be reduced while those between dissimilar instances from different classes should be enlarged, at both the intra-view and inter-view levels; 2) manifold structures embedded in similar and dissimilar instances should be preserved. These modelling considerations are illustrated in Figure 1.
Along this line, we construct the objective function of DCLMP as follows:
$$ \min_{\{U, V\}} \; \frac{1}{N} \sum_{i=1}^{N} \frac{1}{N} \sum_{j=1}^{N} \left\| \,\cdots\, \right\| + \cdots \tag{3.1} $$
where $U$ and $V$ denote the projection matrices into the $r$-dimensional common space of the two views and $\mathrm{KNN}(\cdot)$ denotes the $k$-nearest neighbors of an instance. Equation (3.1) involves a discriminative weighting matrix, a pair of within-class manifold weighting matrices (one for each view of feature representations), and a pair of between-class manifold weighting matrices (one for each view). Their elements are defined as follows:
(3.2) |
(3.3) |
(3.4) |
(3.5) |
(3.6) |
where $\mathrm{KNN}(\cdot)$ denotes the $k$-nearest neighbors of an instance, and the width coefficients normalize the weights.
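Since the printed forms of (3.2)–(3.6) are not reproduced here, the sketch below shows only a typical heat-kernel KNN weighting of the kind described above; the function name, the single width coefficient `sigma` and the exact class gating are our assumptions, not the authors' definitions.

```python
import numpy as np

def knn_heat_weights(X, labels, k=5, sigma=1.0, same_class=True):
    """Heat-kernel weights over k-nearest-neighbor pairs:
    W_ij = exp(-||xi - xj||^2 / sigma) if xj is among the k nearest
    neighbors of xi and the pair matches the requested class relation
    (same class for within-class matrices, different classes for
    between-class ones); 0 otherwise."""
    N = X.shape[1]
    d2 = np.sum((X[:, :, None] - X[:, None, :]) ** 2, axis=0)  # pairwise squared distances
    W = np.zeros((N, N))
    for i in range(N):
        for j in np.argsort(d2[i])[1:k + 1]:                   # skip the instance itself
            if (labels[i] == labels[j]) == same_class:
                W[i, j] = np.exp(-d2[i, j] / sigma)
    return np.maximum(W, W.T)                                   # symmetrize
```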
In Eq (3.1), the first part characterizes the cross-view similarity and dissimilarity discriminations, the second part preserves the manifold relationships within each class, and the third part enlarges the distribution margins between dissimilar pairs of instances. In this way, both the discriminative information and the manifold distributions are modelled in a joint objective function.
For convenience of solving Eq (3.1), we transform it into the following concise form:
(3.7) |
with
(3.8) |
(3.9) |
(3.10) |
(3.11) |
where the auxiliary matrices are assembled from the data matrices and the weighting matrices defined above.
We record the objective function value of Eq (3.7) and introduce a stacked variable $Q$ to replace $U$ and $V$, rewriting Eq (3.7) as
(3.12) |
Calculating the partial derivative of the objective with respect to $Q$ and setting it to zero yields
(3.13) |
The projection matrix $Q$ can be obtained by computing the required number of smallest eigenvectors of the coefficient matrix in Eq (3.13). Finally, $U$ and $V$ can be recovered from $Q$ through Eq (3.8).
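A minimal sketch of this eigen-solution step (the name `M` for the coefficient matrix of Eq (3.13) is ours):

```python
import numpy as np

def smallest_eigvec_projection(M, p, q, r):
    """Stack Q of the r smallest eigenvectors of the symmetric matrix M
    from Eq (3.13); U and V are recovered by splitting Q along the view
    dimensions p and q."""
    w, V = np.linalg.eigh(M)       # eigenvalues in ascending order
    Q = V[:, :r]                   # r smallest eigenvectors
    return Q[:p], Q[p:p + q]       # U (p, r) and V (q, r)
```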
We find that such an objective function may not be convex [32,33]. In contrast, the separability of nonlinear data patterns in a geodesic space can be significantly improved, which benefits their subsequent recognition. Referring to [34], we reformulate DCLMP in (3.7) equivalently as
(3.14) |
Minimizing the third term is equivalent to minimizing the objective of Eq (3.7). Although the last term is nonlinear, it is defined on the convex cone of symmetric positive definite matrices [35] and thus remains convex. As a result, Eq (3.14) is entirely convex with respect to its matrix variable and enjoys a closed-form solution [36,37,38]. To distinguish Eq (3.14) from DCLMP, we call it C-DCLMP.
For convenience of deriving the closed-form solution, we reformulate Eq (3.14) as
(3.15) |
where the parameter setting follows [34]. Consider the subproblem
(3.16) |
whose solution is the midpoint of the geodesic joining the two matrices, that is
(3.17) |
Equation (3.17) denotes the geodesic midpoint. We extend this midpoint solution (3.17) to an arbitrary point on the geodesic by replacing the balance value $1/2$ with a parameter $t \in [0, 1]$.
We add a regularizer with prior knowledge to (3.15). Here, we incorporate the symmetrized LogDet divergence, and consequently (3.15) becomes
(3.18) |
(3.19) |
where $p+q$ is the dimension of the data. Fortunately, complying with the definition of the geometric mean [36], Eq (3.18) is still convex. Setting the gradient of its objective with respect to the metric variable to zero, we obtain
(3.20) |
From this, we calculate the closed-form solution as
(3.21) |
More precisely, according to the definition of the geodesic mean joining two matrices, we can directly expand the final solution of our C-DCLMP in Eq (3.18) as
(3.22) |
where we set the prior matrix to be the identity matrix of order $p+q$. Once the solution is obtained, $U$ and $V$ are recovered.
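For reference, the symmetrized LogDet divergence serving as the metric prior regularizer in Eq (3.18) can be computed as below (a sketch; the function names and the particular symmetrization are ours):

```python
import numpy as np
from scipy.linalg import inv

def logdet_div(A, B):
    """LogDet divergence D_ld(A, B) = tr(A B^-1) - logdet(A B^-1) - d
    for SPD matrices A, B of size d."""
    d = A.shape[0]
    M = A @ inv(B)
    _, logdet = np.linalg.slogdet(M)   # det(M) > 0 for SPD A, B
    return np.trace(M) - logdet - d

def sym_logdet_div(A, B):
    """One common symmetrization of the LogDet divergence."""
    return 0.5 * (logdet_div(A, B) + logdet_div(B, A))
```

With the prior matrix fixed to the identity of order $p+q$, this term pulls the learned metric toward the Euclidean one.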
The concatenated representation can then be generated, and the classification decision can be made on this fused representation using a classifier (e.g., KNN).
To comprehensively evaluate the proposed methods, we first performed comparative experiments on several benchmark and real face datasets. Besides, we also performed sensitivity analysis on the model parameters.
For evaluation and comparison, CCA [1], DCCA [12], MPECCA [14], CECCA [17], DCA [20] and CDCA [26] were implemented. All hyper-parameters were cross-validated in the range [0, 0.1, ..., 1] for the balance parameters and [1e-7, 1e-6, ..., 1e3] for the metric prior parameters. For the concatenated cross-view representations, a $k$-nearest-neighbors classifier was employed for classification. Additionally, recognition accuracy (%, higher is better) and mean absolute error (MAE, lower is better) were adopted as performance measures.
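A sketch of this selection protocol, assuming a caller-supplied trainer `fit_fn` that returns the two projections (all names here are illustrative, not the authors' code):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def select_hyperparams(fit_fn, X, Y, labels, k=5):
    """Grid search over the ranges quoted above: balance parameters on
    [0, 0.1, ..., 1] and metric priors on [1e-7, ..., 1e3], scored by the
    5-fold CV accuracy of a k-NN classifier on the fused features."""
    best_params, best_acc = None, -np.inf
    for gamma in np.linspace(0.0, 1.0, 11):
        for prior in 10.0 ** np.arange(-7.0, 4.0):
            U, V = fit_fn(X, Y, labels, gamma, prior)  # caller-supplied trainer
            Z = (U.T @ X + V.T @ Y).T                  # fused features, one row per sample
            acc = cross_val_score(KNeighborsClassifier(n_neighbors=k),
                                  Z, labels, cv=5).mean()
            if acc > best_acc:
                best_params, best_acc = (gamma, prior), acc
    return best_params, best_acc
```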
We first performed experiments on several widely used non-face multi-view datasets, i.e., MFD [39], USPS [40], AWA [41] and ADNI [42]. We report the results in Table 1.
Dataset | View Representations | CCA | DCA | MPECCA | DCCA | CECCA | CDCA | DCLMP (ours) | C-DCLMP (ours) |
MFD | fac | fou | 80.22±0.9 | 80.00±0.2 | 90.64±1.3 | 95.15±0.9 | 96.46±2.4 | 98.11±0.3 | 94.49±1.7 | 98.03±0.3 |
fac | kar | 92.12±0.5 | 90.10±0.8 | 95.39±0.6 | 95.33±0.7 | 96.52±1.2 | 97.06±0.4 | 96.86±0.5 | 97.93±0.6 |
fac | mor | 78.22±0.8 | 63.22±4.3 | 72.32±2.4 | 95.22±0.9 | 94.23±1.0 | 98.13±0.3 | 90.97±2.5 | 97.63±0.3 |
fac | pix | 83.02±1.2 | 90.20±0.5 | 94.65±0.5 | 65.60±1.1 | 93.67±2.9 | 97.52±0.4 | 97.45±0.5 | 97.21±0.4 |
fac | zer | 84.00±0.6 | 71.50±2.2 | 93.79±0.7 | 96.00±0.6 | 97.04±0.6 | 97.03±0.4 | 95.98±0.3 | 97.75±0.4 |
fou | kar | 90.11±1.0 | 75.42±5.6 | 93.98±0.4 | 89.12±4.3 | 96.90±0.5 | 97.19±0.6 | 97.45±0.4 | 97.45±0.3 |
fou | mor | 70.22±0.4 | 55.82±4.6 | 60.62±1.6 | 82.30±0.9 | 78.25±0.6 | 83.81±0.7 | 82.09±1.0 | 84.80±0.6 |
fou | pix | 68.44±0.4 | 76.10±4.7 | 78.24±1.1 | 90.41±3.2 | 76.28±1.3 | 96.11±0.5 | 97.62±0.4 | 97.74±0.3 |
fou | zer | 74.10±0.9 | 62.80±4.1 | 79.38±1.2 | 79.53±4.5 | 83.16±1.4 | 85.98±0.9 | 85.33±1.1 | 86.56±1.0 |
kar | mor | 64.09±0.6 | 82.00±1.6 | 72.92±2.7 | 91.95±2.8 | 91.89±0.6 | 97.28±0.5 | 96.83±0.5 | 97.14±0.4 |
kar | pix | 88.37±0.9 | 88.85±0.8 | 95.07±0.6 | 92.59±2.0 | 95.98±0.3 | 94.68±0.5 | 97.54±0.4 | 97.31±0.5 |
kar | zer | 90.77±1.0 | 75.97±2.8 | 94.17±0.6 | 88.47±2.9 | 93.57±0.9 | 96.69±0.4 | 96.98±0.4 | 97.42±0.4 |
mor | pix | 68.66±1.5 | 82.01±2.1 | 67.21±2.3 | 93.04±0.7 | 90.08±1.0 | 96.89±0.4 | 97.20±0.5 | 97.19±0.4 |
mor | zer | 73.22±0.6 | 50.35±1.8 | 60.95±1.4 | 84.55±0.9 | 80.59±0.9 | 84.19±0.8 | 81.75±1.1 | 84.29±0.7 |
pix | zer | 82.46±0.6 | 71.16±2.8 | 82.81±1.2 | 91.67±2.1 | 91.81±1.2 | 96.30±0.5 | 97.35±0.5 | 97.30±0.5 |
AWA | cq | lss | 73.11±2.1 | 62.08±0.3 | 76.19±1.0 | 70.51±1.3 | 77.53±1.7 | 87.80±2.8 | 89.03±1.4 | 89.80±1.2 |
cq | phog | 65.21±1.4 | 73.10±1.2 | 72.42±1.6 | 70.15±0.9 | 74.51±2.1 | 85.58±2.7 | 86.71±2.3 | 86.81±1.2 |
cq | rgsift | 60.22±1.3 | 61.40±1.7 | 78.04±1.3 | 82.87±2.4 | 82.83±1.4 | 90.99±3.0 | 93.44±0.6 | 94.34±0.8 |
cq | sift | 74.33±1.3 | 61.28±1.9 | 77.85±1.4 | 83.19±2.1 | 80.05±1.7 | 81.59±5.2 | 87.17±0.8 | 90.68±0.8 |
cq | surf | 75.86±1.7 | 69.30±2.1 | 79.07±0.8 | 73.55±2.3 | 81.59±1.5 | 93.58±1.1 | 94.36±1.0 | 95.35±0.5 |
lss | phog | 69.96±1.7 | 59.72±0.2 | 68.12±1.2 | 64.86±2.6 | 71.36±1.4 | 80.48±2.0 | 81.76±1.1 | 81.62±1.1 |
lss | rgsift | 78.65±0.9 | 63.21±1.3 | 73.64±1.0 | 78.28±2.8 | 77.28±1.4 | 87.38±4.3 | 90.13±0.7 | 89.95±1.0 |
lss | sift | 73.49±1.0 | 65.72±2.1 | 73.12±1.4 | 66.21±1.6 | 76.69±1.7 | 81.56±2.4 | 84.05±0.9 | 84.07±1.9 |
lss | surf | 76.30±1.4 | 65.33±1.8 | 74.84±1.6 | 79.06±2.8 | 78.52±1.3 | 89.81±2.5 | 89.75±0.8 | 91.12±0.7 |
phog | rgsift | 68.18±1.1 | 48.38±1.0 | 69.49±2.3 | 77.37±1.5 | 74.41±1.5 | 82.76±1.1 | 83.57±1.6 | 83.68±1.2 |
phog | sift | 68.26±1.1 | 70.24±1.1 | 68.97±1.3 | 63.16±1.3 | 72.14±1.5 | 80.50±1.2 | 83.57±1.1 | 83.75±1.5 |
phog | surf | 64.57±1.4 | 56.94±0.5 | 71.55±1.4 | 75.68±1.9 | 74.43±2.1 | 84.97±2.6 | 88.02±1.8 | 87.34±0.8 |
rgsift | sift | 71.35±1.3 | 58.56±2.3 | 72.85±1.1 | 75.28±2.5 | 76.69±1.7 | 90.76±2.2 | 93.44±0.4 | 93.79±1.0 |
rgsift | surf | 75.55±1.3 | 67.22±1.6 | 76.94±2.2 | 84.10±2.4 | 80.46±1.7 | 93.25±1.2 | 92.82±0.8 | 93.66±0.8 |
sift | surf | 75.33±1.3 | 63.36±1.6 | 74.27±1.2 | 82.14±2.7 | 75.51±1.1 | 90.07±3.4 | 90.67±1.0 | 91.69±1.1 |
ADNI | AV | FDG | 65.47±1.8 | 73.28±2.1 | 75.28±2.6 | 76.25±2.1 | 76.26±2.5 | 79.59±1.9 | 68.64±3.3 | 80.86±2.1 |
AV | VBM | 71.02±2.4 | 71.02±2.8 | 73.24±3.1 | 63.47±2.1 | 60.67±2.7 | 81.59±2.5 | 78.38±2.5 | 80.70±2.8 |
FDG | VBM | 61.37±1.2 | 65.28±1.6 | 70.37±2.6 | 64.05±1.6 | 70.95±1.8 | 80.12±2.0 | 74.97±2.9 | 80.21±1.7 |
USPS | left | right | 62.14±0.6 | 80.11±1.2 | 66.67±0.9 | 63.96±2.0 | 82.89±1.9 | 89.76±0.3 | 96.19±0.7 | 96.03±0.6 |
The proposed DCLMP yielded the second-best recognition accuracies in most cases, slightly below those of the proposed C-DCLMP. The improvements achieved by C-DCLMP are significant, especially on the AWA and USPS datasets.
We also conducted age estimation experiments on AgeDB [43], CACD [44] and IMDB-WIKI [45]. These three databases are illustrated in Figure 2.
We extracted BIF [46] and HoG [47] feature vectors and reduced their dimensions to 200 by PCA as the two view representations. We randomly chose 50, 100 or 150 samples for training. We also used VGG19 [48] and ResNet50 [45] to extract deep feature vectors from the AgeDB, CACD and IMDB-WIKI databases. We report the results in Tables 2–6.
training samples | CCA | DCA | MPECCA | DCCA | CECCA | CDCA | DCLMP (ours) | C-DCLMP (ours) |
50 | 17.70±0.5 | 17.78±0.5 | 16.10±0.4 | 15.93±0.4 | 15.62±0.5 | 15.48±0.2 | 15.59±0.1 | 15.16±0.4 |
100 | 16.81±0.5 | 17.23±0.6 | 14.74±0.5 | 14.79±0.5 | 14.67±0.4 | 14.57±0.2 | 14.60±0.2 | 14.13±0.2 |
150 | 15.43±0.5 | 16.25±0.6 | 13.83±0.5 | 13.49±0.4 | 13.43±0.4 | 13.21±0.2 | 13.48±0.2 | 13.19±0.3 |
training samples | CCA | DCA | MPECCA | DCCA | CECCA | CDCA | DCLMP (ours) | C-DCLMP (ours) |
50 | 16.17±0.5 | 16.27±0.46 | 15.42±0.5 | 15.09±0.5 | 14.78±0.5 | 14.67±0.3 | 14.75±0.2 | 14.52±0.2 |
100 | 15.86±0.5 | 15.79±0.6 | 14.89±0.8 | 14.23±0.4 | 14.09±0.4 | 13.78±0.3 | 14.07±0.3 | 13.68±0.2 |
150 | 15.09±0.5 | 14.81±0.3 | 13.97±0.5 | 13.41±0.6 | 13.34±0.5 | 13.16±0.3 | 13.46±0.3 | 13.15±0.3 |
training samples | CCA | DCA | MPECCA | DCCA | CECCA | CDCA | DCLMP (ours) | C-DCLMP (ours) |
50 | 16.28±0.5 | 16.78±0.4 | 15.79±0.4 | 14.98±0.4 | 14.38±0.4 | 14.28±0.4 | 14.10±0.3 | 13.95±0.3 |
100 | 15.45±0.4 | 16.52±0.5 | 15.04±0.5 | 14.44±0.4 | 13.99±0.4 | 13.98±0.3 | 13.85±0.2 | 13.74±0.2 |
150 | 15.20±0.5 | 15.41±0.5 | 14.79±0.4 | 14.02±0.5 | 13.73±0.5 | 13.79±0.2 | 13.67±0.1 | 13.63±0.3 |
training samples | CCA | DCA | MPECCA | DCCA | CECCA | CDCA | DCLMP (ours) | C-DCLMP (ours) |
50 | 16.07±0.6 | 16.27±0.4 | 15.35±0.4 | 14.21±0.6 | 13.39±0.4 | 13.49±0.3 | 13.52±0.3 | 13.27±0.2 |
100 | 15.69±0.5 | 15.75±0.3 | 14.65±0.5 | 14.17±0.5 | 13.28±0.3 | 13.26±0.3 | 13.24±0.2 | 12.97±0.4 |
150 | 15.22±0.4 | 15.32±0.4 | 14.45±0.3 | 14.01±0.6 | 13.01±0.3 | 12.94±0.3 | 12.91±0.3 | 12.76±0.4 |
training samples | CCA | DCA | MPECCA | DCCA | CECCA | CDCA | DCLMP (ours) | C-DCLMP (ours) |
50 | 14.29±0.5 | 14.39±0.5 | 13.49±0.3 | 13.04±0.5 | 12.26±0.4 | 12.37±0.3 | 11.84±0.3 | 11.65±0.3 |
100 | 13.97±0.5 | 13.87±0.4 | 12.79±0.5 | 12.35±0.3 | 11.96±0.3 | 11.86±0.3 | 11.53±0.2 | 11.13±0.3 |
150 | 13.43±0.5 | 13.56±0.5 | 12.48±0.3 | 12.26±0.4 | 11.66±0.3 | 11.65±0.3 | 11.45±0.2 | 10.98±0.3 |
The estimation errors (MAEs) of all methods decreased monotonically as the number of training samples increased. The MAEs of DCLMP are the second lowest, demonstrating the soundness of modelling cross-view discriminative knowledge and data manifold structures. We can also observe that C-DCLMP yields the lowest estimation errors, demonstrating its effectiveness and superiority.
For the proposed methods, we analyzed the three parameters involved in (3.21): the geometric weighting parameter, the metric balance parameter and the metric prior parameter. Specifically, we conducted age estimation experiments on both AgeDB and CACD. The results are plotted in Figures 3–5.
Geometric weighting parameter of C-DCLMP: Figure 3 yields some interesting observations. As the parameter increases from 0 to 1, the estimation error first descends and then rises again, showing that the within-class manifolds and the inter-class data distributions help regularize the model solution space.
Metric balance parameter of C-DCLMP: We observe from Figure 4 that the age estimation error (MAE) achieved its lowest values at intermediate settings of the balance parameter. This observation illustrates that preserving the cross-view discriminative knowledge and the manifold distributions is useful and helps improve the estimation precision.
Metric prior parameter of C-DCLMP: Figure 5 shows that, as the value increases, the age estimation error descends to its lowest around 1e-1 and then rises steeply. This demonstrates that incorporating moderate metric prior knowledge regularizes the model solution positively, but excessive prior knowledge may dominate the data and mislead the training of the model.
For the proposed and comparison methods, we analyzed the time complexity. Specifically, we conducted age estimation experiments on both AgeDB and CACD, choosing 100 samples from each class for training and taking the rest for testing. We report the averaged results in Table 7.
Dataset | CCA | DCA | MPECCA | DCCA | CECCA | CDCA | DCLMP (ours) | C-DCLMP (ours) |
AgeDB | 0.10±0.12 | 0.06±0.05 | 0.41±0.03 | 0.18±0.03 | 0.52±0.02 | 0.11±0.10 | 57.74±0.71 | 54.94±0.32 |
CACD | 0.09±0.13 | 0.06±0.10 | 0.38±0.09 | 0.15±0.04 | 0.47±0.03 | 0.07±0.01 | 30.86±0.61 | 31.04±0.68 |
For the proposed methods, we performed ablation experiments. Specifically, we conducted age estimation experiments on both AgeDB and CACD. We repeated each experiment 10 times with random data partitions and report the averaged results in Table 8, in which each listed part corresponds to a term of Eq (3.7).
Dataset | First part | Second part | Third part | C-DCLMP (ours) |
AgeDB | | | | 14.49±0.16 |
 | | | | 14.51±0.10 |
 | | | | 14.47±0.22 |
 | | | | 14.18±0.32 |
CACD | | | | 14.07±0.25 |
 | | | | 14.08±0.32 |
 | | | | 14.04±0.21 |
 | | | | 13.73±0.24 |
In this paper, we proposed DCLMP, in which both the cross-view discriminative information and the spatial structural information of the training data are taken into consideration to enhance subsequent decision making. To pursue closed-form solutions, we remodeled the objective of DCLMP into a nonlinear geodesic space and consequently achieved its convex formulation (C-DCLMP). Finally, we evaluated the proposed methods and demonstrated their superiority on various benchmark and real face datasets. In the future, we will explore the latent information of unlabeled data at the feature and label levels, and study how to combine related advanced multi-view learning methods to reduce the computational cost of the model and further improve its generalization ability in various scenarios.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This work was supported by the National Natural Science Foundation of China under Grant 62176128, the Open Projects Program of State Key Laboratory for Novel Software Technology of Nanjing University under Grant KFKT2022B06, the Fundamental Research Funds for the Central Universities No. NJ2022028, the Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund, as well as the Qing Lan Project of the Jiangsu Province.
[1] P. L. Lai, C. Fyfe, Kernel and nonlinear canonical correlation analysis, Int. J. Neural Syst., 10 (2000), 365–377. https://doi.org/10.1142/S012906570000034X
[2] D. R. Hardoon, S. Szedmak, J. Shawe-Taylor, Canonical correlation analysis: an overview with application to learning methods, Neural Comput., 16 (2004), 2639–2664. https://doi.org/10.1162/0899766042321814
[3] Q. Tian, C. Ma, M. Cao, S. Chen, H. Yin, A convex discriminant semantic correlation analysis for cross-view recognition, IEEE Trans. Cybernetics, 52 (2020), 1–13. https://doi.org/10.1109/TCYB.2020.2988721
[4] Q. Tian, S. Xia, M. Cao, K. Chen, Reliable sensing data fusion through robust multiview prototype learning, IEEE Trans. Ind. Inform., 18 (2022), 2665–2673. https://doi.org/10.1109/TII.2021.3064358
[5] P. Zhuang, J. Wu, F. Porikli, C. Li, Underwater image enhancement with hyper-laplacian reflectance priors, IEEE Trans. Image Process., 31 (2022), 5442–5455. https://doi.org/10.1109/TIP.2022.3196546
[6] V. Sindhwani, D. S. Rosenberg, An RKHS for multi-view learning and manifold co-regularization, in Proceedings of the 25th International Conference on Machine Learning, (2008), 976–983. https://doi.org/10.1145/1390156.1390279
[7] M. H. Quang, L. Bazzani, V. Murino, A unifying framework for vector-valued manifold regularization and multi-view learning, in Proceedings of the 30th International Conference on Machine Learning, (2013), 100–108.
[8] J. Zhao, X. Xie, X. Xu, S. Sun, Multi-view learning overview: Recent progress and new challenges, Inform. Fusion, 38 (2017), 43–54. https://doi.org/10.1016/j.inffus.2017.02.007
[9] D. Zhang, T. He, F. Zhang, Real-time human mobility modeling with multi-view learning, ACM Trans. Intell. Syst. Technol., 9 (2017), 1–25. https://doi.org/10.1145/3092692
[10] D. Zhai, H. Chang, S. Shan, X. Chen, W. Gao, Multiview metric learning with global consistency and local smoothness, ACM Trans. Intell. Syst. Technol., 3 (2012), 1–22. https://doi.org/10.1145/2168752.2168767
[11] P. Zhuang, X. Ding, Underwater image enhancement using an edge-preserving filtering retinex algorithm, Multimed. Tools Appl., 79 (2020), 17257–17277. https://doi.org/10.1007/s11042-019-08404-4
[12] T. Sun, S. Chen, J. Yang, P. Shi, A novel method of combined feature extraction for recognition, in 2008 Eighth IEEE International Conference on Data Mining, (2008), 1043–1048. https://doi.org/10.1109/ICDM.2008.28
[13] Y. Peng, D. Zhang, J. Zhang, A new canonical correlation analysis algorithm with local discrimination, Neural Process. Lett., 31 (2010), 1–15. https://doi.org/10.1007/s11063-009-9123-3
[14] S. Su, H. Ge, Y. H. Yuan, Multi-patch embedding canonical correlation analysis for multi-view feature learning, J. Vis. Commun. Image R., 41 (2016), 47–57. https://doi.org/10.1016/j.jvcir.2016.09.004
[15] Q. S. Sun, Z. D. Liu, P. A. Heng, D. S. Xia, Rapid and brief communication: A theorem on the generalized canonical projective vectors, Pattern Recogn., 38 (2005), 449–452. https://doi.org/10.1016/j.patcog.2004.08.009
[16] H. K. Ji, Q. S. Sun, Y. H. Yuan, Z. X. Ji, Fractional-order embedding supervised canonical correlations analysis with applications to feature extraction and recognition, Neural Process. Lett., 45 (2017), 279–297. https://doi.org/10.1007/s11063-016-9524-z
[17] X. D. Zhou, X. H. Chen, S. C. Chen, Combined-feature-discriminability enhanced canonical correlation analysis, Pattern Recogn. Artif. Intell., 25 (2012), 285–291.
[18] P. N. Belhumeur, J. P. Hespanha, D. J. Kriegman, Eigenfaces vs. fisherfaces: Recognition using class specific linear projection, IEEE Trans. Pattern Anal. Mach. Intell., 19 (1997), 711–720. https://doi.org/10.1109/34.598228
[19] F. Zhao, L. Qiao, F. Shi, P. Yap, D. Shen, Feature fusion via hierarchical supervised local CCA for diagnosis of autism spectrum disorder, Brain Imaging Behav., 11 (2017), 1050–1060. https://doi.org/10.1007/s11682-016-9587-5
[20] M. Haghighat, M. Abdel-Mottaleb, W. Alhalabi, Discriminant correlation analysis: Real-time feature level fusion for multimodal biometric recognition, IEEE Trans. Inform. Foren. Sec., 11 (2016), 1984–1996. https://doi.org/10.1109/TIFS.2016.2569061
[21] A. Sharma, A. Kumar, H. Daume, D. W. Jacobs, Generalized multiview analysis: A discriminative latent space, in 2012 IEEE Conference on Computer Vision and Pattern Recognition, (2012), 2160–2167. https://doi.org/10.1109/CVPR.2012.6247923
[22] S. Sun, X. Xie, M. Yang, Multiview uncorrelated discriminant analysis, IEEE Trans. Cybernetics, 46 (2016), 3272–3284. https://doi.org/10.1109/TCYB.2015.2502248
[23] P. Hu, D. Peng, J. Guo, L. Zhen, Local feature based multi-view discriminant analysis, Knowl.-Based Syst., 149 (2018), 34–46. https://doi.org/10.1016/j.knosys.2018.02.008
[24] X. Fu, K. Huang, M. Hong, N. D. Sidiropoulos, A. M. C. So, Scalable and flexible multiview MAX-VAR canonical correlation analysis, IEEE Trans. Signal Process., 65 (2017), 4150–4165. https://doi.org/10.1109/TSP.2017.2698365
[25] D. Y. Gao, Canonical duality theory and solutions to constrained nonconvex quadratic programming, J. Global Optim., 29 (2004), 377–399. https://doi.org/10.1023/B:JOGO.0000048034.94449.e3
[26] J. Fan, S. Chen, Convex discriminant canonical correlation analysis, Pattern Recogn. Artif. Intell., 30 (2017), 740–746. https://doi.org/10.16451/j.cnki.issn1003-6059.201708008
[27] C. Tang, X. Zheng, X. Liu, W. Zhang, J. Zhang, J. Xiong, et al., Cross-view locality preserved diversity and consensus learning for multi-view unsupervised feature selection, IEEE Trans. Knowl. Data Eng., 34 (2022), 4705–4716. https://doi.org/10.1109/TKDE.2020.3048678
[28] C. Tang, Z. Li, J. Wang, X. Liu, W. Zhang, E. Zhu, Unified one-step multi-view spectral clustering, IEEE Trans. Knowl. Data Eng., 35 (2023), 6449–6460. https://doi.org/10.1109/TKDE.2022.3172687
[29] J. Wang, C. Tang, Z. Wan, W. Zhang, K. Sun, A. Y. Zomaya, Efficient and effective one-step multiview clustering, IEEE Trans. Neur. Net. Learn. Syst., (2023), 1–12. https://doi.org/10.1109/TNNLS.2023.3253246
[30] P. L. Lai, C. Fyfe, Kernel and nonlinear canonical correlation analysis, Int. J. Neural Syst., 10 (2000), 365–377.
[31] K. Fukumizu, F. R. Bach, A. Gretton, Statistical consistency of kernel canonical correlation analysis, J. Mach. Learn. Res., 8 (2007), 361–383.
[32] T. Liu, T. K. Pong, Further properties of the forward–backward envelope with applications to difference-of-convex programming, Comput. Optim. Appl., 67 (2017), 480–520. https://doi.org/10.1007/s10589-017-9900-2
[33] T. P. Dinh, H. M. Le, H. A. Le Thi, F. Lauer, A difference of convex functions algorithm for switched linear regression, IEEE Trans. Automat. Contr., 59 (2014), 2277–2282. https://doi.org/10.1109/TAC.2014.2301575
[34] P. Zadeh, R. Hosseini, S. Sra, Geometric mean metric learning, in Proceedings of The 33rd International Conference on Machine Learning, (2016), 2464–2471.
[35] S. Boyd, L. Vandenberghe, Convex Optimization, Cambridge University Press, Cambridge, 2004.
[36] V. Arsigny, P. Fillard, X. Pennec, N. Ayache, Geometric means in a novel vector space structure on symmetric positive-definite matrices, SIAM J. Matrix Anal. Appl., 29 (2007), 328–347. https://doi.org/10.1137/050637996
[37] A. Papadopoulos, Metric Spaces, Convexity and Nonpositive Curvature, European Mathematical Society, Zurich, 2005.
[38] T. Rapcsák, Geodesic convexity in nonlinear optimization, J. Optim. Theory Appl., 69 (1991), 169–183. https://doi.org/10.1007/BF00940467
[39] C. L. Liu, K. Nakashima, H. Sako, H. Fujisawa, Handwritten digit recognition: investigation of normalization and feature extraction techniques, Pattern Recogn., 37 (2004), 265–279. https://doi.org/10.1016/S0031-3203(03)00224-3
[40] T. Pawlicki, D. S. Lee, J. J. Hull, S. N. Srihari, Neural network models and their application to handwritten digit recognition, in IEEE 1988 International Conference on Neural Networks, 2 (1988), 63–70. https://doi.org/10.1109/ICNN.1988.23913
[41] C. H. Lampert, H. Nickisch, S. Harmeling, Learning to detect unseen object classes by between-class attribute transfer, in 2009 IEEE Conference on Computer Vision and Pattern Recognition, (2009), 951–958. https://doi.org/10.1109/CVPR.2009.5206594
[42] C. R. Jack, M. A. Bernstein, N. C. Fox, P. Thompson, G. Alexander, D. Harvey, et al., The Alzheimer's disease neuroimaging initiative (ADNI): MRI methods, J. Magn. Reson. Imaging, 27 (2008), 685–691. https://doi.org/10.1002/jmri.21049
[43] S. Moschoglou, A. Papaioannou, C. Sagonas, J. Deng, I. Kotsia, S. Zafeiriou, AgeDB: the first manually collected, in-the-wild age database, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, (2017), 51–59.
[44] B. C. Chen, C. S. Chen, W. H. Hsu, Cross-age reference coding for age-invariant face recognition and retrieval, in Computer Vision – ECCV 2014, Springer, (2014), 768–783. https://doi.org/10.1007/978-3-319-10599-4_49
[45] R. Rothe, R. Timofte, L. Van Gool, Deep expectation of real and apparent age from a single image without facial landmarks, Int. J. Comput. Vis., 126 (2018), 144–157. https://doi.org/10.1007/s11263-016-0940-3
[46] G. Guo, G. Mu, Y. Fu, T. S. Huang, Human age estimation using bio-inspired features, in 2009 IEEE Conference on Computer Vision and Pattern Recognition, (2009), 112–119. https://doi.org/10.1109/CVPR.2009.5206681
[47] Q. Zhu, M. C. Yeh, K. T. Cheng, S. Avidan, Fast human detection using a cascade of histograms of oriented gradients, in 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), (2006), 1491–1498. https://doi.org/10.1109/CVPR.2006.119
[48] K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition, preprint, arXiv:1409.1556.