Research article

Uncertainty distributions of solutions to nabla Caputo uncertain difference equations and application to a logistic model

  • The nabla fractional-order uncertain difference equation of Caputo type was analyzed in this article. To begin, the existence and uniqueness theorem of solutions for nabla Caputo uncertain difference equations with almost surely bounded uncertain variables was presented. Furthermore, the uncertainty distributions of the solutions of the proposed equations were obtained by establishing a connection between the solutions of the equations and their α-paths based on new comparison theorems. Finally, an application of the uncertain difference equations to a logistic population model involving the Allee effect was provided, and examples were given to demonstrate the validity of the theoretical results presented.

    Citation: Qinyun Lu, Ya Li, Hai Zhang, Hongmei Zhang. Uncertainty distributions of solutions to nabla Caputo uncertain difference equations and application to a logistic model[J]. AIMS Mathematics, 2024, 9(9): 23752-23769. doi: 10.3934/math.20241154




    Correlation analysis deals with data that have cross-view feature representations. To handle such tasks, many correlation learning approaches have been proposed, among which canonical correlation analysis (CCA) [1,2,3,4,5] is a representative method that has been widely employed [6,7,8,9,10,11]. To be specific, given training data with two or more feature-view representations, the traditional CCA method seeks a projection vector for each of the views while maximizing the cross-view correlations. After the data are mapped along the projection directions, subsequent cross-view decisions can be made [4]. Although CCA yields good results, room for improvement remains because the data labels are not incorporated in learning.

    When class label information is also provided or available, CCA can be remodeled into its discriminant form by making use of the labels. To this end, Sun et al. [12] proposed a discriminative variant of CCA (i.e., DCCA) that enlarges distances between dissimilar samples while reducing those between similar samples. Subsequently, Peng et al. [13] built a locally discriminative version of CCA (i.e., LDCCA) based on the assumption that the data distributions follow low-dimensional manifold embeddings. Besides, Su et al. [14] established a multi-patch embedding CCA (MPECCA) by developing multiple metrics, rather than a single one, to model within-class scatters. Afterwards, Sun et al. [15] built a generalized framework for CCA (GCCA). Ji et al. [16] remodeled the scatter matrices by decomposing them into several fractional-order components and achieved performance improvements.

    In addition to directly constructing a label-exploited version of CCA, the supervised labels can be utilized by embedding them as regularization terms. Along this direction, Zhou et al. [17] presented CECCA by embedding LDA-guided [18] feature combinations into the objective function of CCA. Furthermore, Zhao et al. [19] constructed HSL-CCA by reducing inter-class scatters within their local neighborhoods. Later, Haghighat et al. [20] proposed the DCA model by deconstructing the inter-class scatter matrix guided by class labels. Previous variants of CCA were designed to cater for two-view data and cannot be used directly to handle multi-view scenarios. To overcome this shortcoming, many CCA methods have been proposed, such as GCA [21], MULDA [22] and FMDA [23].

    Although the aforementioned methods have achieved success to varying extents, the objective functions of nearly all of them are not convex [14,24,25]. Although CDCA [26] yields closed-form solutions and better results than the previous methods, the spatial structural information of the training data is still left unexploited.

    To overcome these shortcomings, we first design a discriminative correlation learning with manifold preservation, coined DCLMP, in which not only the cross-view discriminative information but also the spatial structural information of the training data is taken into account to enhance subsequent decision making. To pursue closed-form solutions, we remodel the objective of DCLMP from the Euclidean space to a geodesic space. In this way, we obtain a convex formulation of DCLMP (C-DCLMP). Finally, we comprehensively evaluate the proposed methods and demonstrate their superiority on both toy and real datasets. To summarize, our contributions are three-fold:

    1. A DCLMP is constructed by modelling both cross-view discriminative information and spatial structural information of training data.

    2. The objective function of DCLMP is remodelled to obtain its convex formulation (C-DCLMP).

    3. The proposed methods are evaluated with extensive experimental comparisons.

    This paper is organized as follows. Section 2 reviews related theories of CCA. Section 3 presents models and their solving algorithms. Then, experiments and comparisons are reported to evaluate the methods in Section 4. Section 5 concludes and provides future directions.

    In this section, we briefly review work on multi-view learning, which studies how to establish constraints or dependencies between views by modeling and discovering their interrelations. Tang et al. [27] proposed a multi-view feature selection method named CvLP-DCL, which divides the label space into a consensus part and a domain-specific part and explores the latent information between different views in the label space. Additionally, CvLP-DCL explores how to combine cross-domain similarity graph learning with matrix-induced regularization to boost model performance. Tang et al. [28] also proposed UoMvSc for multi-view learning, which mines the value of view-specific graphs and embedding matrices by combining spectral clustering with k-means clustering. In addition, Wang et al. [29] proposed an effective framework for multi-view learning named E2OMVC, which constructs a latent feature representation based on anchor graphs and a clustering indicator matrix for multi-view data to obtain better clustering results.

    We briefly review related theories of CCA [1,2]. Given two-view feature representations of training data, CCA seeks two projection matrices, one for each view, while preserving the cross-view correlations. To be specific, let {\textbf X} = [{\textbf x}_1, ..., {\textbf x}_N] \in \mathbb{R}^{p\times N} and {\textbf Y} = [{\textbf y}_1, ..., {\textbf y}_N] \in \mathbb{R}^{q\times N} be two view representations of N training samples, with {\textbf x}_i and {\textbf y}_i denoting normalized representations of the i th sample. Besides, let {\textbf W}_x \in \mathbb{R}^{p\times r} and {\textbf W}_y \in \mathbb{R}^{q\times r} denote the projection matrices mapping the training data from the individual view spaces into an r -dimensional common space. Then, the correlation between {\textbf W}_x^T{\textbf x}_i and {\textbf W}_y^T{\textbf y}_i should be maximized. Consequently, the formal objective of CCA can be formulated as

    \begin{equation} \max\limits_{\{{\textbf W}_x, {\textbf W}_y\}} \; \frac{{\textbf W}_x^T{\textbf C}_{xy}{\textbf W}_y}{\sqrt{{\textbf W}_x^T{\textbf C}_{xx}{\textbf W}_x}\sqrt{{\textbf W}_y^T{\textbf C}_{yy}{\textbf W}_y}}, \end{equation} (2.1)

    where {\textbf C}_{xx} = \frac{1}{N}\sum_{i = 1}^{N}({\textbf x}_i-\bar{{\textbf x}})({\textbf x}_i-\bar{{\textbf x}})^T , {\textbf C}_{yy} = \frac{1}{N}\sum_{i = 1}^{N}({\textbf y}_i-\bar{{\textbf y}})({\textbf y}_i-\bar{{\textbf y}})^T , and {\textbf C}_{xy} = \frac{1}{N}\sum_{i = 1}^{N}({\textbf x}_i-\bar{{\textbf x}})({\textbf y}_i-\bar{{\textbf y}})^T , where \bar{{\textbf x}} = \frac{1}{N}\sum_{i = 1}^{N}{\textbf x}_i and \bar{{\textbf y}} = \frac{1}{N}\sum_{i = 1}^{N}{\textbf y}_i respectively denote the sample means of the two views. The numerator describes the sample correlation in the projected space, while the denominator limits the scatter of each view. Typically, Eq (2.1) is converted to a generalized eigenvalue problem as

    \begin{equation} \begin{pmatrix} {\textbf 0} & {\textbf X}{\textbf Y}^T \\ {\textbf Y}{\textbf X}^T & {\textbf 0} \end{pmatrix} \begin{pmatrix} {\textbf W}_x \\ {\textbf W}_y \end{pmatrix} = \lambda \begin{pmatrix} {\textbf X}{\textbf X}^T & {\textbf 0} \\ {\textbf 0} & {\textbf Y}{\textbf Y}^T \end{pmatrix} \begin{pmatrix} {\textbf W}_x \\ {\textbf W}_y \end{pmatrix}. \end{equation} (2.2)

    Then, \begin{pmatrix} {\textbf W}_x \\ {\textbf W}_y \end{pmatrix} can be obtained by computing the r largest eigenvectors of

    \begin{pmatrix} {\textbf X}{\textbf X}^T & {\textbf 0} \\ {\textbf 0} & {\textbf Y}{\textbf Y}^T \end{pmatrix}^{-1} \begin{pmatrix} {\textbf 0} & {\textbf X}{\textbf Y}^T \\ {\textbf Y}{\textbf X}^T & {\textbf 0} \end{pmatrix}.

    After {\textbf W}_x and {\textbf W}_y are obtained, {\textbf x}_i and {\textbf y}_i can be concatenated as {\textbf W}_x^T{\textbf x}_i+{\textbf W}_y^T{\textbf y}_i = \begin{pmatrix} {\textbf W}_x \\ {\textbf W}_y \end{pmatrix}^T \begin{pmatrix} {\textbf x}_i \\ {\textbf y}_i \end{pmatrix} . Once the concatenated feature representations are obtained, subsequent classification or regression decisions can be made.
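
    As a concrete illustration, the following is a minimal numpy/scipy sketch of solving Eq (2.2); the matrix names mirror the notation above, while the small ridge term reg is our own addition for numerical stability and is not part of the original formulation.

```python
import numpy as np
from scipy.linalg import eigh

def cca_fit(X, Y, r, reg=1e-6):
    """Sketch of classical CCA via the generalized eigenproblem of Eq (2.2).

    X: p x N and Y: q x N hold column-wise samples; r is the common-space
    dimension; reg is a small ridge for numerical stability (our assumption).
    """
    p, q = X.shape[0], Y.shape[0]
    Xc = X - X.mean(axis=1, keepdims=True)   # center each view
    Yc = Y - Y.mean(axis=1, keepdims=True)

    # Left- and right-hand block matrices of Eq (2.2)
    A = np.zeros((p + q, p + q))
    A[:p, p:] = Xc @ Yc.T
    A[p:, :p] = Yc @ Xc.T
    B = np.zeros((p + q, p + q))
    B[:p, :p] = Xc @ Xc.T + reg * np.eye(p)
    B[p:, p:] = Yc @ Yc.T + reg * np.eye(q)

    # Generalized eigenvectors with the r largest eigenvalues give [Wx; Wy]
    vals, vecs = eigh(A, B)
    top = np.argsort(vals)[::-1][:r]
    Wx, Wy = vecs[:p, top], vecs[p:, top]
    return Wx, Wy
```

    With Wx and Wy in hand, the concatenated representation of a sample is simply Wx.T @ x_i + Wy.T @ y_i, as in the expression above.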

    The most classic work on discriminative CCA is DCCA [12], which is formulated as follows:

    \begin{equation} \max\limits_{{\textbf w}_x, {\textbf w}_y} \; \left({\textbf w}_x^T{\textbf C}_w{\textbf w}_y-\eta\, {\textbf w}_x^T{\textbf C}_b{\textbf w}_y\right) \quad \text{ s.t. } \; {\textbf w}_x^T{\textbf X}{\textbf X}^T{\textbf w}_x = 1, \; {\textbf w}_y^T{\textbf Y}{\textbf Y}^T{\textbf w}_y = 1 \end{equation} (2.3)

    DCCA is discriminative because it uses instance labels to characterize the within-class and between-class relationships. Similar to DCCA, Peng et al. [13] proposed LDCCA, which is formulated as follows:

    \begin{equation} \max\limits_{{\textbf w}_x, {\textbf w}_y} \; \frac{{\textbf w}_x^T\tilde{{\textbf C}}_{xy}{\textbf w}_y}{\sqrt{({\textbf w}_x^T{\textbf C}_{xx}{\textbf w}_x)({\textbf w}_y^T{\textbf C}_{yy}{\textbf w}_y)}} \quad \text{ s.t. } \; {\textbf w}_x^T{\textbf X}{\textbf X}^T{\textbf w}_x = 1, \; {\textbf w}_y^T{\textbf Y}{\textbf Y}^T{\textbf w}_y = 1 \end{equation} (2.4)

    where \tilde{{\textbf C}}_{xy} = {\textbf C}_w-\eta {\textbf C}_b . Compared with DCCA, LDCCA considers the local correlations of the within-class and between-class sets. However, these methods do not consider multimodal recognition or feature-level fusion. Haghighat et al. [20] proposed DCA, which incorporates the class structure, i.e., the memberships of the samples in classes, into the correlation analysis. Additionally, Su et al. [14] proposed MPECCA for multi-view feature learning, which is formulated as follows:

    \begin{equation} \begin{aligned} \max\limits_{{\textbf u}, {\textbf v}, w_j^{(x)}, w_r^{(y)}} \; & {\textbf u}^T\left(\sum\limits_{i = 1}^{N}\sum\limits_{j = 1}^{M}\sum\limits_{r = 1}^{M}\left(w_j^{(x)}w_r^{(y)}\right){\textbf X}{\textbf S}_{ij}^{(x)}{\textbf L}_i{{\textbf S}_{ir}^{(y)}}^T{\textbf Y}^T\right){\textbf v} \\ \text{ s.t. } \; & {\textbf u}^T{\textbf S}_{wx}{\textbf u} = 1, \quad {\textbf v}^T{\textbf S}_{wy}{\textbf v} = 1, \\ & \sum\limits_{j = 1}^{M}w_j^{(x)} = 1, \; w_j^{(x)}\geqslant 0, \quad \sum\limits_{r = 1}^{M}w_r^{(y)} = 1, \; w_r^{(y)}\geqslant 0 \end{aligned} \end{equation} (2.5)

    where {\textbf u} and {\textbf v} denote the correlation projection vectors. By combining LDA and CCA, CECCA was proposed [17], with the following optimization objective:

    \begin{equation} \begin{aligned} \max _{w_{x}, w_{y}} & \boldsymbol{w}_{x}^{\mathrm{T}} \boldsymbol{X}(\boldsymbol{I}+2 \boldsymbol{A}) \boldsymbol{Y}^{\mathrm{T}} \boldsymbol{w}_{y}+\boldsymbol{w}_{x}^{\mathrm{T}} \boldsymbol{X} \boldsymbol{A} \boldsymbol{X}^{\mathrm{T}} \boldsymbol{w}_{x}+\boldsymbol{w}_{y}^{\mathrm{T}} \boldsymbol{Y} \boldsymbol{A} \boldsymbol{Y}^{\mathrm{T}} \boldsymbol{w}_{y} \\ \text { s. t. }& \boldsymbol{w}_{x}^{\mathrm{T}} \boldsymbol{X} \boldsymbol{X}^{\mathrm{T}} \boldsymbol{w}_{x}+\boldsymbol{w}_{y}^{\mathrm{T}} \boldsymbol{Y} \boldsymbol{Y}^{\mathrm{T}} \boldsymbol{w}_{y} = 2 \end{aligned} \end{equation} (2.6)

    where \boldsymbol{A} = 2\boldsymbol{U}-\boldsymbol{I} and \boldsymbol{I} denotes the identity matrix. On the basis of CCA, CECCA incorporates discriminant analysis to jointly optimize the correlation and discriminability of the combined features, which makes the extracted features more suitable for classification. However, these methods cannot achieve closed-form solutions. CDCA [26] combines GMML and discriminative CCA and achieves a closed-form solution in a Riemannian manifold space; its optimization objective is as follows:

    \begin{equation} \begin{aligned} \min _{\bf{A} \succ \bf{0}} \gamma \operatorname{tr}(\boldsymbol{A} \boldsymbol{C})+(1-\gamma)\left(\operatorname{tr}\left(\boldsymbol{A} \boldsymbol{S}_{Z}\right)+\operatorname{tr}\left(\boldsymbol{A}^{-1} \boldsymbol{D}_{Z}\right)\right) = \\ \operatorname{tr}\left(\boldsymbol{A}\left(\gamma \boldsymbol{C}+(1-\gamma) \boldsymbol{S}_{Z}\right)\right)+\operatorname{tr}\left(\boldsymbol{A}^{-1}(1-\gamma) \boldsymbol{D}_{Z}\right) \end{aligned} \end{equation} (2.7)

    From Eq (2.7) and CDCA [26], we can see that, with the help of the discriminative part and the closed-form solution, multi-view learning can easily attain the global optimality of solutions and achieve good results.

    CCA and its variants suffer from three main problems: (1) the similarity and dissimilarity across views are not modeled; (2) although the data labels can be exploited by imposing supervised constraints, the resulting objective functions are nonconvex; (3) the cross-view correlations are modeled in Euclidean space or through RKHS kernel transformations [30,31], whose discriminating ability is limited.

    We present a novel cross-view learning model, called DCLMP, in which not only the within-class and between-class scatters are characterized, but also the similarity and dissimilarity of the training data across views are modelled. Although many preferable characteristics are incorporated into DCLMP, its objective function still suffers from non-convexity. To facilitate pursuing globally optimal solutions, we further remodel DCLMP in a Riemannian manifold space to make the objective function convex. The resulting method is named C-DCLMP.

    Assume we are given N training instances sampled from K classes with two views of feature representations, i.e., {\textbf X} = [{\textbf X}_1, {\textbf X}_2, \cdot \cdot \cdot, {\textbf X}_K] \in \mathbb{R}^{p\times N} with {\textbf X}_k = [{\textbf x}_1^k, {\textbf x}_2^k, \cdot \cdot \cdot, {\textbf x}_{N_k}^k] being the N_k x-view instances from the k -th class, and {\textbf Y} = [{\textbf Y}_1, {\textbf Y}_2, \cdot \cdot \cdot, {\textbf Y}_K] \in \mathbb{R}^{q\times N} with {\textbf Y}_k = [{\textbf y}_1^k, {\textbf y}_2^k, \cdot \cdot \cdot, {\textbf y}_{N_k}^k] being the N_k y-view instances from the k -th class, where {\textbf x}_i^k and {\textbf y}_i^k stand for the two view representations of the same instance. In order to concatenate them for subsequent classification, we denote by {\textbf U} \in \mathbb{R}^{p\times r} and {\textbf V} \in \mathbb{R}^{q\times r} the projection matrices for the two views that transform their representations into an r -dimensional common space.

    To perform cross-view learning while exploiting supervision knowledge, in terms of similar and dissimilar relationships among instances within each view and across views, as well as sample distribution manifolds, we construct DCLMP. To this end, we take the following aspects into account: 1) distances between similar instances from the same class should be reduced while those between dissimilar instances from different classes should be enlarged, at both the intra-view and inter-view levels; 2) manifold structures embedded in similar and dissimilar instances should be preserved. These modelling considerations are illustrated in Figure 1.

    Figure 1.  Modelling strategy of DCLMP. Circular and triangular shapes represent samples from two different classes, while different filling colors represent different view representations. The samples are dispersed in the original feature representation space (a); in the DCLMP projection space (b), similar samples are pushed closer while dissimilar samples from different classes are pulled apart, and their manifold relations are preserved.

    Along this line, we construct the objective function of DCLMP as follows:

    \begin{equation} { \begin{split} & \min\limits_{\{{\textbf U}, {\textbf V}\}} \; \frac{1}{N}\sum\limits_{i = 1}^{N} \frac{1}{N} \sum\limits_{j = 1}^{N}\|{\textbf U}^T{\textbf x}_i-{\textbf V}^T{\textbf y}_j\|_2^2\cdot {\textbf L}_{ij}\\ & \quad + \frac{\lambda_1}{K}\sum\limits_{k = 1}^{K}\frac{1}{N_k}\sum\limits_{i = 1}^{N_k}\frac{1}{k_{n}}\sum\limits_{j = 1}^{k_{n}}\Bigg\{\|{\textbf U}^T{\textbf x}_i^k-{\textbf U}^T{\textbf x}_j^k\|_F^2\cdot {\textbf S}_{ij}^{w_x} \\ & \qquad + \|{\textbf V}^T{\textbf y}_i^k-{\textbf V}^T{\textbf y}_j^k\|_F^2\cdot {\textbf S}_{ij}^{w_y}\Bigg\} \\ & \quad - \frac{\lambda_2}{K}\sum\limits_{k = 1}^K\sum\limits_{h\neq k}\frac{1}{N_k}\sum\limits_{i = 1}^{N_k}\frac{1}{k_{n}}\sum\limits_{j = 1}^{k_{n}}\Bigg\{\|{\textbf U}^T{\textbf x}_i^k-{\textbf U}^T{\textbf x}_j^h\|_F^2\cdot {\textbf S}_{ij}^{b_x}\\ & \qquad + \|{\textbf V}^T{\textbf y}_i^k - {\textbf V}^T{\textbf y}_j^h\|_F^2\cdot {\textbf S}_{ij}^{b_y}\Bigg\} \end{split} } \end{equation} (3.1)

    where {\textbf U} and {\textbf V} denote the projection matrices in the r -dimensional common space of two views and k_{n} denotes the k -nearest neighbors of an instance. {\textbf L} is the discriminative weighting matrix. {\textbf S}^{w_x} and {\textbf S}^{w_y} stand for the within-class manifold weighting matrices of two different views of feature representations, and {\textbf S}^{b_x} and {\textbf S}^{b_y} stand for the between-class manifold weighting matrices of two different views of feature representations. Their elements are defined as follows

    \begin{equation} { \begin{split} & {\textbf L}_{ij} = \begin{cases} \frac{1}{N_k}& {\textbf x}_i \ \text{and}\ {\textbf y}_j \ \text{are from the same class}\\ 0& {\textbf x}_i\ \text{and}\ {\textbf y}_j\ \text{are from different classes} \end{cases} \end{split} } \end{equation} (3.2)
    \begin{equation} { \begin{split} & {\textbf S}_{ij}^{w_x} = \begin{cases} exp\left(-\frac{\|{\textbf x}_i^k-{\textbf x}_j^k\|^2}{\sigma_x^2}\right)& { {\textbf x}_j^k \in {\rm{ KNN}} _{k_n} ( {\textbf x}_i^k ) }\\ 0 & { {\textbf x}_j^k \notin {\rm{KNN}} _{k_n} ( {\textbf x}_i^k ) } \end{cases} \end{split} } \end{equation} (3.3)
    \begin{equation} { \begin{split} & {\textbf S}_{ij}^{w_y} = \begin{cases} exp\left(-\frac{\|{\textbf y}_i^k-{\textbf y}_j^k\|^2}{\sigma_y^2}\right)& { {\textbf y}_j^k \in {\rm{KNN}} _{k_n} ( {\textbf y}_i^k ) }\\ 0 & { {\textbf y}_j^k \notin {\rm{KNN}} _{k_n} ( {\textbf y}_i^k ) } \end{cases} \end{split} } \end{equation} (3.4)
    \begin{equation} { \begin{split} & {\textbf S}_{ij}^{b_x} = \begin{cases} exp\left(-\frac{\|{\textbf x}_i^k-{\textbf x}_j^h\|^2}{\sigma_x^2}\right)& { {\textbf x}_j^h \in {\rm{KNN}} _{k_n} ( {\textbf x}_i^k ) }\\ 0 & { {\textbf x}_j^h \notin {\rm{KNN}} _{k_n} ( {\textbf x}_i^k ) } \end{cases} \end{split} } \end{equation} (3.5)
    \begin{equation} { \begin{split} & {\textbf S}_{ij}^{b_y} = \begin{cases} exp\left(-\frac{\|{\textbf y}_i^k-{\textbf y}_j^h\|^2}{\sigma_y^2}\right)& { {\textbf y}_j^h \in {\rm{KNN}} _{k_n} ( {\textbf y}_i^k ) }\\ 0 & { {\textbf y}_j^h \notin {\rm{KNN}} _{k_n} ( {\textbf y}_i^k ) } \end{cases} \end{split} } \end{equation} (3.6)

    where KNN _{k_n} denotes the k_n -nearest neighbors of an instance. \sigma_x and \sigma_y stand for width coefficients to normalize the weights.

    In Eq (3.1), the first part characterizes the cross-view similarity and dissimilarity discriminations, the second part preserves the within-class manifold relationships, while the third part magnifies the distribution margins between dissimilar pairs of instances. In this way, both the discriminative information and the manifold distributions are modelled in a joint objective function.
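
    For illustration, below is a hedged Python sketch of how the weighting matrices of Eqs (3.2) and (3.3) could be assembled; Eqs (3.4)–(3.6) are analogous. The function names, and the choice to exclude a point from its own neighbor list, are our assumptions rather than details fixed by the paper.

```python
import numpy as np

def label_weight_matrix(labels_x, labels_y):
    """Sketch of the discriminative weighting matrix L of Eq (3.2):
    L_ij = 1/N_k if x_i and y_j belong to the same class k, else 0."""
    labels_x, labels_y = np.asarray(labels_x), np.asarray(labels_y)
    L = (labels_x[:, None] == labels_y[None, :]).astype(float)
    for k in np.unique(labels_x):
        L[labels_x == k] /= np.sum(labels_x == k)   # divide rows of class k by N_k
    return L

def within_class_weight_matrix(Xk, k_n, sigma):
    """Sketch of the within-class manifold weights S^{w_x} of Eq (3.3) for one
    class block Xk (p x N_k, columns are samples)."""
    Nk = Xk.shape[1]
    diff = Xk[:, :, None] - Xk[:, None, :]
    d2 = np.sum(diff ** 2, axis=0)                  # pairwise squared distances
    S = np.zeros((Nk, Nk))
    for i in range(Nk):
        nn = np.argsort(d2[i])[1:k_n + 1]           # k_n nearest neighbors (self excluded)
        S[i, nn] = np.exp(-d2[i, nn] / sigma ** 2)
    return S
```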

    For convenience of solving Eq (3.1), we transform it into the following concise form

    \begin{equation} { \begin{split} & \min\limits_{{\textbf A} \succ 0} \; tr\big({\textbf A}({\textbf C}+\lambda_1 {\textbf S}_z - \lambda_2 {\textbf D}_z)\big) \end{split} } \end{equation} (3.7)

    with

    \begin{equation} { \begin{split} & {\textbf A} = \left[ \begin{aligned} {\textbf U} \\ {\textbf V} \end{aligned} \right] \left[ \begin{aligned} {\textbf U} \\ {\textbf V} \end{aligned} \right]^T \end{split} } \end{equation} (3.8)
    \begin{equation} { \begin{split} & {\textbf C} = \left[ \begin{aligned} {\textbf 1}_{p\times p} \\ {\textbf 0}_{q\times p} \end{aligned} \right] {\textbf X}{\textbf M}^L{\textbf X}^T[{\textbf 0}_{p\times q}, {\textbf 1}_{q\times q}]+ \left[ \begin{aligned} {\textbf 0}_{p\times q} \\ {\textbf 1}_{q\times q} \end{aligned} \right] {\textbf Y}{\textbf M}^L{\textbf Y}^T[{\textbf 0}_{q\times p}, {\textbf 1}_{q\times q}]-\\ & \qquad \left[ \begin{aligned} {\textbf 1}_{p\times p} \\ {\textbf 0}_{q\times p} \end{aligned} \right] {\textbf X}{\textbf L}{\textbf Y}^T[{\textbf 0}_{q\times p}, {\textbf 1}_{q\times q}]- \left[ \begin{aligned} {\textbf 0}_{p\times q} \\ {\textbf 1}_{q\times q} \end{aligned} \right] {\textbf Y}{\textbf L}{\textbf X}^T[{\textbf 1}_{p\times p}, {\textbf 0}_{p\times q}] \end{split} } \end{equation} (3.9)
    \begin{equation} { \begin{split} & {\textbf S}_z = \sum\limits_{k = 1}^{K}\Bigg\{\left[ \begin{aligned} {\textbf 1}_{p\times p} \\ {\textbf 0}_{q\times p} \end{aligned} \right]{\textbf X}^k\left({\textbf M}^{w_x}+{{\textbf M}^{w_x}}^T-{\textbf S}^{w_x}-{{\textbf S}^{w_x}}^T\right){{\textbf X}^k}^T[{\textbf 1}_{p\times p}, {\textbf 0}_{p\times q}]\\ & \qquad + \left[ \begin{aligned} {\textbf 0}_{p\times q} \\ {\textbf 1}_{q\times q} \end{aligned} \right]{\textbf Y}^k\left({\textbf M}^{w_y}+{{\textbf M}^{w_y}}^T-{\textbf S}^{w_y}-{{\textbf S}^{w_y}}^T\right){{\textbf Y}^k}^T[{\textbf 0}_{q\times p}, {\textbf 1}_{q\times q}] \Bigg\} \end{split} } \end{equation} (3.10)
    \begin{equation} { \begin{split} & {\textbf D}_z = \Bigg\{\left[ \begin{aligned} {\textbf 1}_{p\times p} \\ {\textbf 0}_{q\times p} \end{aligned} \right]{\textbf X}\left({\textbf M}^{b_x}+{{\textbf M}^{b_x}}^T-{\textbf S}^{b_x}-{{\textbf S}^{b_x}}^T\right){\textbf X}^T[{\textbf 1}_{p\times p}, {\textbf 0}_{p\times q}]\\ & \qquad + \left[ \begin{aligned} {\textbf 0}_{p\times q} \\ {\textbf 1}_{q\times q} \end{aligned} \right]{\textbf Y}\left({\textbf M}^{b_y}+{{\textbf M}^{b_y}}^T-{\textbf S}^{b_y}-{{\textbf S}^{b_y}}^T\right){\textbf Y}^T[{\textbf 0}_{q\times p}, {\textbf 1}_{q\times q}] \Bigg\} \end{split} } \end{equation} (3.11)

    where {\textbf M}^{w_x}_{ii} = \sum_{j = 1}^{N}{\textbf S}_{ij}^{w_x} , {\textbf M}^{w_y}_{ii} = \sum_{j = 1}^{N}{\textbf S}_{ij}^{w_y} , {\textbf M}^{b_x}_{ii} = \sum_{j = 1}^{N}{\textbf S}_{ij}^{b_x} , {\textbf M}^{b_y}_{ii} = \sum_{j = 1}^{N}{\textbf S}_{ij}^{b_y} , and {\textbf M}^{L}_{ii} = \sum_{j = 1}^{N}{\textbf L}_{ij} .

    We let \mathcal{J} denote the objective function value of Eq (3.7), introduce {\textbf Q} with {\textbf Q}^T{\textbf Q} = {\textbf I} to replace {\textbf A} (i.e., {\textbf A} = {\textbf Q}{\textbf Q}^T ), and use a Lagrange multiplier {\Lambda} to rewrite Eq (3.7) as

    \begin{equation} { \begin{split} & \mathcal{J}_{\{{\textbf Q}, {\Lambda}\}} = tr\big({\textbf Q}^T\left({\textbf C}+\lambda_1 {\textbf S}_z - \lambda_2 {\textbf D}_z\right){\textbf Q}\big) - tr\big({\Lambda}({\textbf Q}^T{\textbf Q} - {\textbf I})\big). \end{split} } \end{equation} (3.12)

    Calculating the partial derivative of \mathcal{J}_{\{{\textbf Q}, {\Lambda}\}} with respect to {\textbf Q} and setting it to zero yields

    \begin{equation} { \begin{split} & ({\textbf C}+\lambda_1 {\textbf S}_z - \lambda_2 {\textbf D}_z){\textbf Q} = {\textbf Q}{\Lambda}, \end{split} } \end{equation} (3.13)

    The projection matrix {\textbf Q} can be obtained by computing the required number of smallest eigenvectors of {\textbf C}+\lambda_1 {\textbf S}_z - \lambda_2 {\textbf D}_z . Finally, we can recover {\textbf A} = {\textbf Q}{\textbf Q}^T , and {\textbf U} and {\textbf V} can be obtained through Eq (3.8).
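
    The eigen-solution described above can be sketched in a few lines of numpy/scipy; the splitting of Q into U and V follows Eq (3.8), and the function name is ours.

```python
import numpy as np
from scipy.linalg import eigh

def dclmp_solve(C, Sz, Dz, lam1, lam2, p, r):
    """Sketch of the DCLMP solution via Eq (3.13): take the r eigenvectors of
    C + lam1*Sz - lam2*Dz with the smallest eigenvalues, recover A = Q Q^T,
    and read U (first p rows) and V (remaining rows) off Q as in Eq (3.8)."""
    M = C + lam1 * Sz - lam2 * Dz
    M = (M + M.T) / 2                 # symmetrize against numerical noise
    vals, vecs = eigh(M)              # eigh returns ascending eigenvalues
    Q = vecs[:, :r]                   # r smallest eigenvectors
    A = Q @ Q.T
    U, V = Q[:p, :], Q[p:, :]
    return U, V, A
```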

    We find that such an objective function may not be convex [32,33]. However, the separability of nonlinear data patterns can be significantly improved in a geodesic space, which benefits their subsequent recognition. Referring to \min_{{\textbf A}\succ 0} \; tr({\textbf A}^{-1}\bullet) \Leftrightarrow \max_{{\textbf A} \succ 0} \; tr({\textbf A}\bullet) [34], we reformulate DCLMP in (3.7) equivalently as

    \begin{equation} { \begin{split} & \min\limits_{{\textbf A} \succ 0} \; tr\big({\textbf A}{\textbf C}+\lambda_1 {\textbf A}{\textbf S}_z + \lambda_2{\textbf A}^{-1}{\textbf D}_z\big) \Leftrightarrow \min\limits_{{\textbf A} \succ 0} \; tr({\textbf A}{\textbf C})+\lambda_1tr({\textbf A}{\textbf S}_z) + \lambda_2tr({\textbf A}^{-1}{\textbf D}_z), \end{split} } \end{equation} (3.14)

    Minimizing the third term \lambda_2 tr({\textbf A}^{-1}{\textbf D}_z) plays the same role as minimizing -\lambda_2 tr\left({\textbf AD}_z\right) in Eq (3.7). Although the last term is nonlinear, it is defined on the convex cone of positive definite matrices [35] and thus is still convex. As a result, Eq (3.14) is entirely convex with respect to {\textbf A} and enjoys a closed-form solution [36,37,38]. To distinguish Eq (3.14) from DCLMP, we call it C-DCLMP.

    For convenience of deriving the closed-form solution, we reformulate Eq (3.14) as

    \begin{equation} { \begin{split} & \min\limits_{{\textbf A} \succ 0} \; \gamma tr({\textbf A}^{-1}{\textbf D}_z) + (1-\gamma)\left(tr({\textbf AS}_z\right) + \alpha tr\left({\textbf AC})\right), \end{split} } \end{equation} (3.15)

    where we set \gamma \in (0, 1) [34]. Let J({\textbf A}): = \gamma tr({\textbf A}^{-1}{\textbf D}_z) + (1-\gamma)\left(tr\left({\textbf AS}_z\right) + \alpha tr\left({\textbf AC}\right)\right) . Setting the gradient of J({\textbf A}) with respect to {\textbf A} to zero yields

    \begin{equation} { \begin{split} & (1-\gamma){\textbf A}({\textbf S}_z+\alpha{\textbf C}){\textbf A} = \gamma {\textbf D}_z, \\ \end{split} } \end{equation} (3.16)

    whose solution is the midpoint of the geodesic joining ((1-\gamma)({\textbf S}_z+\alpha{\textbf C}))^{-1} and \gamma{\textbf D}_z , that is

    \begin{equation} { \begin{split} & {\textbf A} = \big((1-\gamma)({\textbf S}_z+\alpha{\textbf C})\big)^{-1}\sharp_{1/2}(\gamma {\textbf D}_z), \\ \end{split} } \end{equation} (3.17)

    where (\cdot)\sharp_{1/2}(\cdot) denotes the geodesic midpoint. We generalize the geodesic mean solution (3.17) by replacing (\cdot)\sharp_{1/2}(\cdot) with the weighted geodesic mean (\cdot)\sharp_{t}(\cdot) , 0\leqslant t\leqslant 1 .
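
    The weighted geodesic mean used here admits the standard closed form {\textbf A}\sharp_{t}{\textbf B} = {\textbf A}^{1/2}({\textbf A}^{-1/2}{\textbf B}{\textbf A}^{-1/2})^{t}{\textbf A}^{1/2} for symmetric positive definite matrices. The following is a small Python sketch of this operator (the helper name is ours), which the later solution formulas can reuse.

```python
import numpy as np
from scipy.linalg import sqrtm, fractional_matrix_power

def spd_geodesic(A, B, t=0.5):
    """Sketch of the weighted geodesic mean A #_t B of two SPD matrices;
    t = 0.5 gives the midpoint used in Eq (3.17)."""
    A_half = sqrtm(A)
    A_half_inv = np.linalg.inv(A_half)
    inner = fractional_matrix_power(A_half_inv @ B @ A_half_inv, t)
    G = A_half @ inner @ A_half
    return np.real(G + G.T) / 2       # strip tiny imaginary/asymmetric numerical noise
```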

    We add a regularizer with prior knowledge to (3.15). Here, we incorporate symmetrized LogDet divergence and consequently (3.15) becomes

    \begin{equation} { \begin{split} & \min\limits_{{\textbf A}\succ 0} \; \gamma tr({\textbf A}^{-1}{\textbf D}_z) + (1-\gamma)(tr\left({\textbf AS}_z\right) + \alpha tr\left({\textbf AC}\right)) \\ & \qquad + \lambda D_{sld}({\textbf A}, {\textbf A}_0), \end{split} } \end{equation} (3.18)
    \begin{equation} { \begin{split} & D_{sld}({\textbf A}, {\textbf A}_0) \; = tr({\textbf AA}_0^{-1})+tr({\textbf A}^{-1}{\textbf A}_0)-2(p+q), \end{split} } \end{equation} (3.19)

    where (p + q) is the dimension of the data. Fortunately, complying with the definition of the geometric mean [36], Eq (3.18) is still convex. We let G({\textbf A}): = \gamma tr({\textbf A}^{-1}{\textbf D}_z) + (1-\gamma)(tr\left({\textbf AS}_z\right) + \alpha tr\left({\textbf AC}\right)) + \lambda D_{sld}({\textbf A}, {\textbf A}_0) . Then we set the gradient of G({\textbf A}) with respect to {\textbf A} to zero and obtain the equation

    \begin{equation} { \begin{split} & (1-\gamma){\textbf A}({\textbf S}_z+\alpha{\textbf C}){\textbf A}+\lambda {\textbf A}{\textbf A}_0^{-1}{\textbf A} = \gamma {\textbf D}_z+\lambda {\textbf A}_0, \\ \end{split} } \end{equation} (3.20)

    from which we calculate the closed-form solution as

    \begin{equation} { \begin{split} & {\textbf A} = ((1-\gamma)({\textbf S}_z+\alpha{\textbf C})+\lambda {\textbf A}_0^{-1})^{-1}\sharp_{t}(\gamma {\textbf D}_z+\lambda {\textbf A}_0).\\ \end{split} } \end{equation} (3.21)

    More precisely, according to the definition of (\cdot)\sharp_{t}(\cdot) , namely the weighted geodesic mean joining two matrices, we can directly expand the final solution of our C-DCLMP in Eq (3.18) as

    \begin{equation} { \begin{split} & {\textbf A} = ((1-\gamma)({\textbf S}_z+\alpha{\textbf C})+\lambda {\textbf A}_0^{-1})^{-1}\sharp_{t}(\gamma {\textbf D}_z+\lambda {\textbf A}_0)\\ & \; \; = \left((1-\gamma)({\textbf S}_z+\alpha{\textbf C})+\lambda {\textbf A}_0^{-1}\right)^{1/2}\\ & \quad \Big(\left((1-\gamma)({\textbf S}_z+\alpha{\textbf C})+\lambda {\textbf A}_0^{-1}\right)^{-1/2}(\gamma {\textbf D}_z+\lambda {\textbf A}_0)\\ & \quad \left((1-\gamma)({\textbf S}_z+\alpha{\textbf C})+\lambda {\textbf A}_0^{-1}\right)^{-1/2}\Big)^{t}\\ & \quad \left((1-\gamma)({\textbf S}_z+\alpha{\textbf C})+\lambda {\textbf A}_0^{-1}\right)^{1/2}. \end{split} } \end{equation} (3.22)

    where we set {\textbf A}_0 to be the (p + q) -order identity matrix {\textbf I}_{p+q} . Once {\textbf A} is obtained, {\textbf U} and {\textbf V} can be recovered.
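
    Putting the pieces together, a hedged sketch of the closed-form C-DCLMP solution is given below. It follows Eq (3.21) with the standard definition of \sharp_{t} , reuses the spd_geodesic helper sketched after Eq (3.17), and defaults A0 to the identity as stated above; the function name is ours.

```python
import numpy as np

def cdclmp_solve(C, Sz, Dz, gamma, alpha, lam, t, A0=None):
    """Sketch of the closed-form C-DCLMP solution of Eq (3.21):
    A = ((1-gamma)(Sz + alpha*C) + lam*A0^{-1})^{-1} #_t (gamma*Dz + lam*A0)."""
    n = C.shape[0]                    # n = p + q
    if A0 is None:
        A0 = np.eye(n)                # A0 = I_{p+q}, as in the paper
    A0_inv = np.linalg.inv(A0)
    left = np.linalg.inv((1 - gamma) * (Sz + alpha * C) + lam * A0_inv)
    right = gamma * Dz + lam * A0
    return spd_geodesic(left, right, t)   # helper sketched after Eq (3.17)
```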

    For a test instance with views {\textbf x} and {\textbf y} , its fused representation can be generated by {\textbf U}^T{\textbf x}+{\textbf V}^T{\textbf y} = \left[\begin{aligned} {\textbf U} \\ {\textbf V} \end{aligned} \right]^T\left[\begin{aligned} {\textbf x}\\ {\textbf y} \end{aligned} \right] , and the classification decision can be made on this fused representation using a classifier (e.g., KNN).
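
    For completeness, a brief sketch of the fusion and the 5-nearest-neighbor decision used in the experiments is given below; X_train, Y_train, X_test, Y_test and labels_train are placeholder arrays, and U, V are the learned projection matrices.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Fuse the two views with the learned U and V, then classify with 5-NN.
Z_train = U.T @ X_train + V.T @ Y_train    # r x N fused training features
Z_test = U.T @ X_test + V.T @ Y_test

clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(Z_train.T, labels_train)           # sklearn expects samples in rows
predictions = clf.predict(Z_test.T)
```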

    To comprehensively evaluate the proposed methods, we first performed comparative experiments on several benchmark and real face datasets. Besides, we also performed sensitivity analysis on the model parameters.

    For evaluation and comparisons, CCA [1], DCCA [12], MPECCA [14], CECCA [17], DCA [20] and CDCA [26] were implemented. All hyper-parameters were cross-validated in the range of [0, 0.1, ..., 1] for t and \gamma , and [1e-7, 1e-6, ..., 1e3] for \alpha and \lambda . For the concatenated cross-view representations, a 5 -nearest-neighbor classifier was employed for classification. Additionally, recognition accuracy (%, higher is better) and mean absolute error (MAE, lower is better) were adopted as performance measures.
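
    The cross-validation grids described above can be written down directly; the following lines are a sketch of the assumed search ranges.

```python
import numpy as np

# Hyper-parameter grids described in the text (values assumed from the stated ranges).
t_grid = np.round(np.arange(0.0, 1.01, 0.1), 2)        # t and gamma in [0, 0.1, ..., 1]
gamma_grid = t_grid.copy()
alpha_grid = [10.0 ** e for e in range(-7, 4)]         # alpha and lambda in [1e-7, ..., 1e3]
lambda_grid = list(alpha_grid)
```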

    We first performed experiments on several widely used non-face multi-view datasets, i.e., MFD [39], USPS [40], AWA [41] and ADNI [42]. We report the results in Table 1.

    Table 1.  Recognition accuracy (%) comparison on the non-face datasets.
    Dataset View Representations CCA DCA MPECCA DCCA CECCA CDCA DCLMP (ours) C-DCLMP (ours)
    MFD fac fou 80.22 \pm 0.9 80.00 \pm 0.2 90.64 \pm 1.3 95.15 \pm 0.9 96.46 \pm 2.4 98.11 \pm 0.3 94.49 \pm 1.7 98.03 \pm 0.3
    fac kar 92.12 \pm 0.5 90.10 \pm 0.8 95.39 \pm 0.6 95.33 \pm 0.7 96.52 \pm 1.2 97.06 \pm 0.4 96.86 \pm 0.5 97.93 \pm 0.6
    fac mor 78.22 \pm 0.8 63.22 \pm 4.3 72.32 \pm 2.4 95.22 \pm 0.9 94.23 \pm 1.0 98.13 \pm 0.3 90.97 \pm 2.5 97.63 \pm 0.3
    fac pix 83.02 \pm 1.2 90.20 \pm 0.5 94.65 \pm 0.5 65.60 \pm 1.1 93.67 \pm 2.9 97.52 \pm 0.4 97.45 \pm 0.5 97.21 \pm 0.4
    fac zer 84.00 \pm 0.6 71.50 \pm 2.2 93.79 \pm 0.7 96.00 \pm 0.6 97.04 \pm 0.6 97.03 \pm 0.4 95.98 \pm 0.3 97.75 \pm 0.4
    fou kar 90.11 \pm 1.0 75.42 \pm 5.6 93.98 \pm 0.4 89.12 \pm 4.3 96.90 \pm 0.5 97.19 \pm 0.6 97.45 \pm 0.4 97.45 \pm 0.3
    fou mor 70.22 \pm 0.4 55.82 \pm 4.6 60.62 \pm 1.6 82.30 \pm 0.9 78.25 \pm 0.6 83.81 \pm 0.7 82.09 \pm 1.0 84.80 \pm 0.6
    fou pix 68.44 \pm 0.4 76.10 \pm 4.7 78.24 \pm 1.1 90.41 \pm 3.2 76.28 \pm 1.3 96.11 \pm 0.5 97.62 \pm 0.4 97.74 \pm 0.3
    fou zer 74.10 \pm 0.9 62.80 \pm 4.1 79.38 \pm 1.2 79.53 \pm 4.5 83.16 \pm 1.4 85.98 \pm 0.9 85.33 \pm 1.1 86.56 \pm 1.0
    kar mor 64.09 \pm 0.6 82.00 \pm 1.6 72.92 \pm 2.7 91.95 \pm 2.8 91.89 \pm 0.6 97.28 \pm 0.5 96.83 \pm 0.5 97.14 \pm 0.4
    kar pix 88.37 \pm 0.9 88.85 \pm 0.8 95.07 \pm 0.6 92.59 \pm 2.0 95.98 \pm 0.3 94.68 \pm 0.5 97.54 \pm 0.4 97.31 \pm 0.5
    kar zer 90.77 \pm 1.0 75.97 \pm 2.8 94.17 \pm 0.6 88.47 \pm 2.9 93.57 \pm 0.9 96.69 \pm 0.4 96.98 \pm 0.4 97.42 \pm 0.4
    mor pix 68.66 \pm 1.5 82.01 \pm 2.1 67.21 \pm 2.3 93.04 \pm 0.7 90.08 \pm 1.0 96.89 \pm 0.4 97.20 \pm 0.5 97.19 \pm 0.4
    mor zer 73.22 \pm 0.6 50.35 \pm 1.8 60.95 \pm 1.4 84.55 \pm 0.9 80.59 \pm 0.9 84.19 \pm 0.8 81.75 \pm 1.1 84.29 \pm 0.7
    pix zer 82.46 \pm 0.6 71.16 \pm 2.8 82.81 \pm 1.2 91.67 \pm 2.1 91.81 \pm 1.2 96.30 \pm 0.5 97.35 \pm 0.5 97.30 \pm 0.5
    AWA cq lss 73.11 \pm 2.1 62.08 \pm 0.3 76.19 \pm 1.0 70.51 \pm 1.3 77.53 \pm 1.7 87.80 \pm 2.8 89.03 \pm 1.4 89.80 \pm 1.2
    cq phog 65.21 \pm 1.4 73.10 \pm 1.2 72.42 \pm 1.6 70.15 \pm 0.9 74.51 \pm 2.1 85.58 \pm 2.7 86.71 \pm 2.3 86.81 \pm 1.2
    cq rgsift 60.22 \pm 1.3 61.40 \pm 1.7 78.04 \pm 1.3 82.87 \pm 2.4 82.83 \pm 1.4 90.99 \pm 3.0 93.44 \pm 0.6 94.34 \pm 0.8
    cq sift 74.33 \pm 1.3 61.28 \pm 1.9 77.85 \pm 1.4 83.19 \pm 2.1 80.05 \pm 1.7 81.59 \pm 5.2 87.17 \pm 0.8 90.68 \pm 0.8
    cq surf 75.86 \pm 1.7 69.30 \pm 2.1 79.07 \pm 0.8 73.55 \pm 2.3 81.59 \pm 1.5 93.58 \pm 1.1 94.36 \pm 1.0 95.35 \pm 0.5
    lss phog 69.96 \pm 1.7 59.72 \pm 0.2 68.12 \pm 1.2 64.86 \pm 2.6 71.36 \pm 1.4 80.48 \pm 2.0 81.76 \pm 1.1 81.62 \pm 1.1
    lss rgsift 78.65 \pm 0.9 63.21 \pm 1.3 73.64 \pm 1.0 78.28 \pm 2.8 77.28 \pm 1.4 87.38 \pm 4.3 90.13 \pm 0.7 89.95 \pm 1.0
    lss sift 73.49 \pm 1.0 65.72 \pm 2.1 73.12 \pm 1.4 66.21 \pm 1.6 76.69 \pm 1.7 81.56 \pm 2.4 84.05 \pm 0.9 84.07 \pm 1.9
    lss surf 76.30 \pm 1.4 65.33 \pm 1.8 74.84 \pm 1.6 79.06 \pm 2.8 78.52 \pm 1.3 89.81 \pm 2.5 89.75 \pm 0.8 91.12 \pm 0.7
    phog rgsift 68.18 \pm 1.1 48.38 \pm 1.0 69.49 \pm 2.3 77.37 \pm 1.5 74.41 \pm 1.5 82.76 \pm 1.1 83.57 \pm 1.6 83.68 \pm 1.2
    phog sift 68.26 \pm 1.1 70.24 \pm 1.1 68.97 \pm 1.3 63.16 \pm 1.3 72.14 \pm 1.5 80.50 \pm 1.2 83.57 \pm 1.1 83.75 \pm 1.5
    phog surf 64.57 \pm 1.4 56.94 \pm 0.5 71.55 \pm 1.4 75.68 \pm 1.9 74.43 \pm 2.1 84.97 \pm 2.6 88.02 \pm 1.8 87.34 \pm 0.8
    rgsift sift 71.35 \pm 1.3 58.56 \pm 2.3 72.85 \pm 1.1 75.28 \pm 2.5 76.69 \pm 1.7 90.76 \pm 2.2 93.44 \pm 0.4 93.79 \pm 1.0
    rgsift surf 75.55 \pm 1.3 67.22 \pm 1.6 76.94 \pm 2.2 84.10 \pm 2.4 80.46 \pm 1.7 93.25 \pm 1.2 92.82 \pm 0.8 93.66 \pm 0.8
    sift surf 75.33 \pm 1.3 63.36 \pm 1.6 74.27 \pm 1.2 82.14 \pm 2.7 75.51 \pm 1.1 90.07 \pm 3.4 90.67 \pm 1.0 91.69 \pm 1.1
    ADNI AV FDG 65.47 \pm 1.8 73.28 \pm 2.1 75.28 \pm 2.6 76.25 \pm 2.1 76.26 \pm 2.5 79.59 \pm 1.9 68.64 \pm 3.3 80.86 \pm 2.1
    AV VBM 71.02 \pm 2.4 71.02 \pm 2.8 73.24 \pm 3.1 63.47 \pm 2.1 60.67 \pm 2.7 81.59 \pm 2.5 78.38 \pm 2.5 80.70 \pm 2.8
    FDG VBM 61.37 \pm 1.2 65.28 \pm 1.6 70.37 \pm 2.6 64.05 \pm 1.6 70.95 \pm 1.8 80.12 \pm 2.0 74.97 \pm 2.9 80.21 \pm 1.7
    USPS left right 62.14 \pm 0.6 80.11 \pm 1.2 66.67 \pm 0.9 63.96 \pm 2.0 82.89 \pm 1.9 89.76 \pm 0.3 96.19 \pm 0.7 96.03 \pm 0.6


    The proposed DCLMP method yielded the second-best recognition accuracies in most cases, only slightly below those of the proposed C-DCLMP. The improvement achieved by C-DCLMP is significant, especially on the AWA and USPS datasets.

    We also conducted age estimation experiments on AgeDB [43], CACD [44] and IMDB-WIKI [45]. These three databases are illustrated in Figure 2.

    Figure 2.  Face examples from (a) AgeDB, (b) CACD datasets and (c) IMDB-WIKI dataset.

    We extracted BIF [46] and HoG [47] feature vectors and reduced their dimensions to 200 by PCA as two view representations. We randomly chose 50, 100, and 150 samples for training. We also used VGG19 [48] and ResNet50 [45] to extract deep feature vectors from the AgeDB, CACD and IMDB-WIKI databases. We report the results in Tables 2–6.

    Table 2.  Age estimation results (MAE \pm STD) on AgeDB.
    training samples CCA DCA MPECCA DCCA CECCA CDCA DCLMP (ours) C-DCLMP (ours)
    50 17.70 \pm 0.5 17.78 \pm 0.5 16.10 \pm 0.4 15.93 \pm 0.4 15.62 \pm 0.5 15.48 \pm 0.2 15.59 \pm 0.1 15.16 \pm 0.4
    100 16.81 \pm 0.5 17.23 \pm 0.6 14.74 \pm 0.5 14.79 \pm 0.5 14.67 \pm 0.4 14.57 \pm 0.2 14.60 \pm 0.2 14.13 \pm 0.2
    150 15.43 \pm 0.5 16.25 \pm 0.6 13.83 \pm 0.5 13.49 \pm 0.4 13.43 \pm 0.4 13.21 \pm 0.2 13.48 \pm 0.2 13.19 \pm 0.3

    Table 3.  Age estimation results (MAE \pm STD) on AgeDB (deep features).
    training samples CCA DCA MPECCA DCCA CECCA CDCA DCLMP (ours) C-DCLMP (ours)
    50 16.17 \pm 0.5 16.27 \pm 0.46 15.42 \pm 0.5 15.09 \pm 0.5 14.78 \pm 0.5 14.67 \pm 0.3 14.75 \pm 0.2 14.52 \pm 0.2
    100 15.86 \pm 0.5 15.79 \pm 0.6 14.89 \pm 0.8 14.23 \pm 0.4 14.09 \pm 0.4 13.78 \pm 0.3 14.07 \pm 0.3 13.68 \pm 0.2
    150 15.09 \pm 0.5 14.81 \pm 0.3 13.97 \pm 0.5 13.41 \pm 0.6 13.34 \pm 0.5 13.16 \pm 0.3 13.46 \pm 0.3 13.15 \pm 0.3

    Table 4.  Age estimation results (MAE \pm STD) on CACD.
    training samples CCA DCA MPECCA DCCA CECCA CDCA DCLMP (ours) C-DCLMP (ours)
    50 16.28 \pm 0.5 16.78 \pm 0.4 15.79 \pm 0.4 14.98 \pm 0.4 14.38 \pm 0.4 14.28 \pm 0.4 14.10 \pm 0.3 13.95 \pm 0.3
    100 15.45 \pm 0.4 16.52 \pm 0.5 15.04 \pm 0.5 14.44 \pm 0.4 13.99 \pm 0.4 13.98 \pm 0.3 13.85 \pm 0.2 13.74 \pm 0.2
    150 15.20 \pm 0.5 15.41 \pm 0.5 14.79 \pm 0.4 14.02 \pm 0.5 13.73 \pm 0.5 13.79 \pm 0.2 13.67 \pm 0.1 13.63 \pm 0.3

    Table 5.  Age estimation results (MAE \pm STD) on CACD (deep features).
    training samples CCA DCA MPECCA DCCA CECCA CDCA DCLMP (ours) C-DCLMP (ours)
    50 16.07 \pm 0.6 16.27 \pm 0.4 15.35 \pm 0.4 14.21 \pm 0.6 13.39 \pm 0.4 13.49 \pm 0.3 13.52 \pm 0.3 13.27 \pm 0.2
    100 15.69 \pm 0.5 15.75 \pm 0.3 14.65 \pm 0.5 14.17 \pm 0.5 13.28 \pm 0.3 13.26 \pm 0.3 13.24 \pm 0.2 12.97 \pm 0.4
    150 15.22 \pm 0.4 15.32 \pm 0.4 14.45 \pm 0.3 14.01 \pm 0.6 13.01 \pm 0.3 12.94 \pm 0.3 12.91 \pm 0.3 12.76 \pm 0.4

    Table 6.  Age estimation results (MAE \pm STD) on IMDB-WIKI.
    training samples CCA DCA MPECCA DCCA CECCA CDCA DCLMP (ours) C-DCLMP (ours)
    50 14.29 \pm 0.5 14.39 \pm 0.5 13.49 \pm 0.3 13.04 \pm 0.5 12.26 \pm 0.4 12.37 \pm 0.3 11.84 \pm 0.3 11.65 \pm 0.3
    100 13.97 \pm 0.5 13.87 \pm 0.4 12.79 \pm 0.5 12.35 \pm 0.3 11.96 \pm 0.3 11.86 \pm 0.3 11.53 \pm 0.2 11.13 \pm 0.3
    150 13.43 \pm 0.5 13.56 \pm 0.5 12.48 \pm 0.3 12.26 \pm 0.4 11.66 \pm 0.3 11.65 \pm 0.3 11.45 \pm 0.2 10.98 \pm 0.3


    The estimation errors (MAEs) of all the methods decreased monotonically as the number of training samples increased. The MAEs of DCLMP are the second lowest, demonstrating the soundness of modelling cross-view discriminative knowledge and data manifold structures. We can also observe that C-DCLMP yields the lowest estimation errors, demonstrating its effectiveness and superiority.

    For the proposed methods, we performed parameter analysis on t , \gamma and \lambda involved in (3.21). Specifically, we conducted age estimation experiments on both AgeDB and CACD. The results are plotted in Figures 3–5.

    Figure 3.  Age estimation MAE on AgeDB (left) and CACD (right) with varying t .
    Figure 4.  Age estimation MAE on AgeDB (left) and CACD (right) with varying \gamma .
    Figure 5.  Age estimation MAE on AgeDB (left) and CACD (right) with varying \lambda .

    Geometric weighting parameter t of C-DCLMP: Figure 3 shows that, as t increases from 0 to 1, the estimation error first descends and then rises again. This indicates that the within-class manifolds and the inter-class data distributions are helpful in regularizing the model solution space.

    Metric balance parameter \gamma of C-DCLMP: We can observe from Figure 4 that the age estimation error (MAE) achieves its lowest values when 0.1 < \gamma < 0.9 . This observation illustrates that preserving the cross-view discriminative knowledge and the manifold distributions of the data is useful and helps improve the estimation precision.

    Metric prior parameter \lambda of C-DCLMP: Figure 5 shows that, as \lambda increases, the age estimation error descends to its lowest value around \lambda = 1e-1 and then rises steeply. This demonstrates that incorporating a moderate amount of metric prior knowledge regularizes the model solution positively, but excessive prior knowledge may dominate the data-driven terms and mislead the training of the model.

    For the proposed methods and the comparison methods mentioned above, we compared running times. Specifically, we conducted age estimation experiments on both AgeDB and CACD by choosing 100 samples from each class for training while taking the rest for testing. We report the averaged results in Table 7.

    Table 7.  Running time results (mean \pm STD) on AgeDB and CACD.
    Dataset CCA DCA MPECCA DCCA CECCA CDCA DCLMP (ours) C-DCLMP (ours)
    AgeDB 0.10 \pm 0.12 0.06 \pm 0.05 0.41 \pm 0.03 0.18 \pm 0.03 0.52 \pm 0.02 0.11 \pm 0.10 57.74 \pm 0.71 54.94 \pm 0.32
    CACD 0.09 \pm 0.13 0.06 \pm 0.10 0.38 \pm 0.09 0.15 \pm 0.04 0.47 \pm 0.03 0.07 \pm 0.01 30.86 \pm 0.61 31.04 \pm 0.68


    For the proposed methods, we performed ablation experiments. Specifically, we conducted age estimation experiments on both AgeDB and CACD. We repeated each experiment 10 times with random data partitions and report the averaged results in Table 8. In Table 8, each part corresponds to the respective term of Eq (3.7).

    Table 8.  Ablation experiment results (MAE \pm STD) on AgeDB and CACD.
    Dataset First part Second part Third part C-DCLMP (ours)
    AgeDB \checkmark \checkmark 14.49 \pm 0.16
    \checkmark \checkmark 14.51 \pm 0.10
    \checkmark \checkmark 14.47 \pm 0.22
    \checkmark \checkmark \checkmark 14.18 \pm 0.32
    CACD \checkmark \checkmark 14.07 \pm 0.25
    \checkmark \checkmark 14.08 \pm 0.32
    \checkmark \checkmark 14.04 \pm 0.21
    \checkmark \checkmark \checkmark 13.73 \pm 0.24


    In this paper, we proposed DCLMP, in which both the cross-view discriminative information and the spatial structural information of the training data are taken into consideration to enhance subsequent decision making. To pursue closed-form solutions, we remodeled the objective of DCLMP into a nonlinear geodesic space and consequently obtained its convex formulation (C-DCLMP). Finally, we evaluated the proposed methods and demonstrated their superiority on various benchmark and real face datasets. In the future, we will consider exploring the latent information of unlabeled data at the feature and label levels, and study how to combine related advanced multi-view learning methods to reduce the computational cost of the model and further improve its generalization ability in various scenarios.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was supported by the National Natural Science Foundation of China under Grant 62176128, the Open Projects Program of State Key Laboratory for Novel Software Technology of Nanjing University under Grant KFKT2022B06, the Fundamental Research Funds for the Central Universities No. NJ2022028, the Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund, as well as the Qing Lan Project of the Jiangsu Province.


