Research article

Numerical solutions to two-dimensional fourth order parabolic thin film equations using the Parabolic Monge-Ampere method

  • This article presents the Parabolic Monge-Ampere (PMA) method for numerical solutions of two-dimensional fourth-order parabolic thin film equations with constant flux boundary conditions. We track the PMA technique, which employs special functions to adapt and drive the mesh movement associated with the physical PDE representing the thin liquid film equation. The accuracy and convergence of the PMA approach are investigated numerically on a two-dimensional problem. Compared with a uniform-mesh finite difference scheme, the method reduces the computational effort.

    Citation: Abdulghani R. Alharbi. Numerical solutions to two-dimensional fourth order parabolic thin film equations using the Parabolic Monge-Ampere method[J]. AIMS Mathematics, 2023, 8(7): 16463-16478. doi: 10.3934/math.2023841




    Correlation analysis deals with data having cross-view feature representations. To handle such tasks, many correlation learning approaches have been proposed, among which canonical correlation analysis (CCA) [1,2,3,4,5] is a representative method that has been widely employed [6,7,8,9,10,11]. Specifically, given training data with two or more feature-view representations, the traditional CCA method seeks a projection vector for each view while maximizing the cross-view correlations. After the data are mapped along the projection directions, subsequent cross-view decisions can be made [4]. Although CCA yields good results, room for improvement remains because the data labels are not incorporated in learning.

    When class label information is also available, CCA can be remodeled into its discriminant form by making use of the labels. To this end, Sun et al. [12] proposed a discriminative variant of CCA (DCCA) that enlarges distances between dissimilar samples while reducing those between similar samples. Subsequently, Peng et al. [13] built a locally discriminative version of CCA (LDCCA) based on the assumption that the data distributions follow a low-dimensional manifold embedding. Besides, Su et al. [14] established a multi-patch embedding CCA (MPECCA) by developing multiple metrics, rather than a single one, to model within-class scatters. Afterwards, Sun et al. [15] built a generalized framework for CCA (GCCA), and Ji et al. [16] remodeled the scatter matrices by decomposing them into several fractional-order components and achieved performance improvements.

    In addition to directly constructing a label-exploited version of CCA, the supervised labels can be utilized by embedding them as regularization terms. Along this direction, Zhou et al. [17] presented CECCA by embedding LDA-guided [18] feature combinations into the objective function of CCA. Furthermore, Zhao et al. [19] constructed HSL-CCA by reducing inter-class scatters within their local neighborhoods. Later, Haghighat et al. [20] proposed the DCA model by decomposing the inter-class scatter matrix guided by class labels. These variants of CCA were designed for two-view data and cannot be used directly to handle multi-view scenarios. To overcome this shortcoming, many multi-view extensions have been proposed, such as GCA [21], MULDA [22] and FMDA [23].

    Although the aforementioned methods have achieved success to varying extents, the objective functions of nearly all of them are not convex [14,24,25]. CDCA [26] yields closed-form solutions and better results than the previous methods; however, it still leaves the spatial manifold structure of the training data unexploited.

    To overcome these shortcomings, we first design a discriminative correlation learning with manifold preservation, coined DCLMP, in which not only the cross-view discriminative information but also the spatial structural information of the training data is taken into account to enhance subsequent decision making. To pursue closed-form solutions, we remodel the objective of DCLMP from Euclidean space to a geodesic space. In this way, we obtain a convex formulation of DCLMP (C-DCLMP). Finally, we comprehensively evaluate the proposed methods and demonstrate their superiority on both toy and real data sets. To summarize, our contributions are three-fold:

    1. DCLMP is constructed by modelling both the cross-view discriminative information and the spatial structural information of the training data.

    2. The objective function of DCLMP is remodelled to obtain its convex formulation (C-DCLMP).

    3. The proposed methods are evaluated with extensive experimental comparisons.

    This paper is organized as follows. Section 2 reviews related theories of CCA. Section 3 presents models and their solving algorithms. Then, experiments and comparisons are reported to evaluate the methods in Section 4. Section 5 concludes and provides future directions.

    In this section, we briefly review work on multi-view learning, which studies how to establish constraints or dependencies between views by modeling and discovering their interrelations. Tang et al. [27] proposed a multi-view feature selection method named CvLP-DCL, which divides the label space into a consensus part and a domain-specific part and explores the latent information between different views in the label space. CvLP-DCL also combines cross-domain similarity graph learning with matrix-induced regularization to boost model performance. Tang et al. [28] further proposed UoMvSc for multi-view learning, which mines the value of view-specific graphs and embedding matrices by combining spectral clustering with k-means clustering. In addition, Wang et al. [29] proposed an effective framework for multi-view learning named E2OMVC, which constructs a latent feature representation based on anchor graphs and a clustering indicator matrix for multi-view data to obtain better clustering results.

    We briefly review related theories of CCA [1,2]. Given two-view feature representations of training data, CCA seeks two projection matrices, one for each view, while preserving the cross-view correlations. To be specific, let {\textbf X} = [{\textbf x}_1, ..., {\textbf x}_N] \in \mathbb{R}^{p\times N} and {\textbf Y} = [{\textbf y}_1, ..., {\textbf y}_N] \in \mathbb{R}^{q\times N} be two view representations of N training samples, with {\textbf x}_i and {\textbf y}_i denoting normalized representations of the i th sample. Besides, let {\textbf W}_x \in \mathbb{R}^{p\times r} and {\textbf W}_y \in \mathbb{R}^{q\times r} denote the projection matrices mapping the training data from the individual view spaces into an r -dimensional common space. Then, the correlation between {\textbf W}_x^T{\textbf x}_i and {\textbf W}_y^T{\textbf y}_i should be maximized. Consequently, the formal objective of CCA can be formulated as

    \begin{equation} \max\limits_{\{{\textbf W}_x, {\textbf W}_y\}} \; \frac{{\textbf W}_x^T{\textbf C}_{xy}{\textbf W}_y}{\sqrt{{\textbf W}_x^T{\textbf C}_{xx}{\textbf W}_x}\sqrt{{\textbf W}_y^T{\textbf C}_{yy}{\textbf W}_y}}, \end{equation} (2.1)

    where {\textbf C}_{xx} = \frac{1}{N}\sum_{i = 1}^{N}({\textbf x}_i-\bar{{\textbf x}})({\textbf x}_i-\bar{{\textbf x}})^T , {\textbf C}_{yy} = \frac{1}{N}\sum_{i = 1}^{N}({\textbf y}_i-\bar{{\textbf y}})({\textbf y}_i-\bar{{\textbf y}})^T and {\textbf C}_{xy} = \frac{1}{N}\sum_{i = 1}^{N}({\textbf x}_i-\bar{{\textbf x}})({\textbf y}_i-\bar{{\textbf y}})^T , where \bar{{\textbf x}} = \frac{1}{N}\sum_{i = 1}^{N}{\textbf x}_i and \bar{{\textbf y}} = \frac{1}{N}\sum_{i = 1}^{N}{\textbf y}_i respectively denote the sample means of the two views. The numerator describes the sample correlation in the projected space, while the denominator limits the scatter of each view. Typically, Eq (2.1) is converted to a generalized eigenvalue problem as

    \begin{equation} \begin{pmatrix} {\textbf 0} & {\textbf X}{\textbf Y}^T \\ {\textbf Y}{\textbf X}^T & {\textbf 0} \end{pmatrix}\begin{pmatrix} {\textbf W}_x \\ {\textbf W}_y \end{pmatrix} = \lambda \begin{pmatrix} {\textbf X}{\textbf X}^T & {\textbf 0} \\ {\textbf 0} & {\textbf Y}{\textbf Y}^T \end{pmatrix}\begin{pmatrix} {\textbf W}_x \\ {\textbf W}_y \end{pmatrix}. \end{equation} (2.2)

    Then, \begin{pmatrix} {\textbf W}_x \\ {\textbf W}_y \end{pmatrix} can be obtained by computing the r largest eigenvectors of

    \begin{pmatrix} {\textbf X}{\textbf X}^T & {\textbf 0} \\ {\textbf 0} & {\textbf Y}{\textbf Y}^T \end{pmatrix}^{-1}\begin{pmatrix} {\textbf 0} & {\textbf X}{\textbf Y}^T \\ {\textbf Y}{\textbf X}^T & {\textbf 0} \end{pmatrix}.

    After {\textbf W}_x and {\textbf W}_y are obtained, {\textbf x}_i and {\textbf y}_i can be concatenated as {\textbf W}_x^T{\textbf x}_i+{\textbf W}_y^T{\textbf y}_i = \begin{pmatrix} {\textbf W}_x \\ {\textbf W}_y \end{pmatrix}^T\begin{pmatrix} {\textbf x}_i \\ {\textbf y}_i \end{pmatrix} . Once the concatenated feature representations are obtained, subsequent classification or regression decisions can be made.
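    To make the procedure above concrete, the following is a minimal NumPy/SciPy sketch of classical CCA solved as the generalized eigenvalue problem in Eq (2.2); it is not the authors' implementation, and the small ridge term `reg` added for numerical stability is an assumption.

```python
import numpy as np
from scipy.linalg import eigh

def cca(X, Y, r, reg=1e-6):
    """X: (p, N), Y: (q, N) with samples as columns; returns Wx (p, r), Wy (q, r)."""
    p, N = X.shape
    q = Y.shape[0]
    Xc = X - X.mean(axis=1, keepdims=True)          # centre each view
    Yc = Y - Y.mean(axis=1, keepdims=True)
    Cxx = Xc @ Xc.T / N + reg * np.eye(p)           # within-view covariances (ridge added)
    Cyy = Yc @ Yc.T / N + reg * np.eye(q)
    Cxy = Xc @ Yc.T / N                             # cross-view covariance
    # Block matrices of the generalized eigenproblem in Eq (2.2)
    A = np.block([[np.zeros((p, p)), Cxy], [Cxy.T, np.zeros((q, q))]])
    B = np.block([[Cxx, np.zeros((p, q))], [np.zeros((q, p)), Cyy]])
    vals, vecs = eigh(A, B)                         # symmetric generalized eigensolver
    top = np.argsort(vals)[::-1][:r]                # keep the r largest eigenvalues
    W = vecs[:, top]
    return W[:p], W[p:]                             # Wx, Wy

# Fused representation of a test pair (x, y): Wx.T @ x + Wy.T @ y
```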

    The most classic work of discriminative CCA is DCCA [12], which is shown as follows:

    \begin{equation} \max\limits_{{\textbf w}_x, {\textbf w}_y} \; {\textbf w}_x^T{\textbf C}_w{\textbf w}_y-\eta\, {\textbf w}_x^T{\textbf C}_b{\textbf w}_y \quad \text{ s.t. } \; {\textbf w}_x^T{\textbf X}{\textbf X}^T{\textbf w}_x = 1, \; {\textbf w}_y^T{\textbf Y}{\textbf Y}^T{\textbf w}_y = 1. \end{equation} (2.3)

    It is easy to see that DCCA is discriminative because it uses the instance labels to compute the within-class and between-class correlations. Similar to DCCA, Peng et al. [13] proposed LDCCA, which is formulated as follows:

    \begin{equation} \max\limits_{{\textbf w}_x, {\textbf w}_y} \; \frac{{\textbf w}_x^T\tilde{{\textbf C}}_{xy}{\textbf w}_y}{\sqrt{({\textbf w}_x^T{\textbf C}_{xx}{\textbf w}_x)({\textbf w}_y^T{\textbf C}_{yy}{\textbf w}_y)}} \quad \text{ s.t. } \; {\textbf w}_x^T{\textbf X}{\textbf X}^T{\textbf w}_x = 1, \; {\textbf w}_y^T{\textbf Y}{\textbf Y}^T{\textbf w}_y = 1, \end{equation} (2.4)

    where \tilde{{\textbf C}}_{xy} = {\textbf C}_w-\eta {\textbf C}_b . Compared with DCCA, LDCCA considers the local correlations of the within-class sets and the between-class sets. However, these methods do not consider the problem of multimodal recognition or feature-level fusion. Haghighat et al. [20] proposed DCA, which incorporates the class structure, i.e., the memberships of the samples in classes, into the correlation analysis. Additionally, Su et al. [14] proposed MPECCA for multi-view feature learning, which is formulated as follows:

    \begin{equation} \begin{split} & \max\limits_{{\textbf u}, {\textbf v}, w_j^{(x)}, w_r^{(y)}} \; {\textbf u}^T\left(\sum\limits_{i = 1}^{N}\sum\limits_{j = 1}^{M}\sum\limits_{r = 1}^{M}\left(w_j^{(x)}w_r^{(y)}\right){\textbf X}{\textbf S}_{ij}^{(x)}{\textbf L}_i{{\textbf S}_{ir}^{(y)}}^T{\textbf Y}^T\right){\textbf v} \\ & \quad \text{ s.t. } \; {\textbf u}^T{\textbf S}_{wx}{\textbf u} = 1, \; {\textbf v}^T{\textbf S}_{wy}{\textbf v} = 1, \; \sum\limits_{j = 1}^{M}w_j^{(x)} = 1, \; w_j^{(x)}\geqslant 0, \end{split} \end{equation} (2.5)

    where {\textbf u} and {\textbf v} denote the correlation projection matrices. Considering the combination of LDA and CCA, CECCA was proposed [17]. The optimization objective of CECCA is as follows:

    \begin{equation} \begin{aligned} \max _{w_{x}, w_{y}} & \boldsymbol{w}_{x}^{\mathrm{T}} \boldsymbol{X}(\boldsymbol{I}+2 \boldsymbol{A}) \boldsymbol{Y}^{\mathrm{T}} \boldsymbol{w}_{y}+\boldsymbol{w}_{x}^{\mathrm{T}} \boldsymbol{X} \boldsymbol{A} \boldsymbol{X}^{\mathrm{T}} \boldsymbol{w}_{x}+\boldsymbol{w}_{y}^{\mathrm{T}} \boldsymbol{Y} \boldsymbol{A} \boldsymbol{Y}^{\mathrm{T}} \boldsymbol{w}_{y} \\ \text { s. t. }& \boldsymbol{w}_{x}^{\mathrm{T}} \boldsymbol{X} \boldsymbol{X}^{\mathrm{T}} \boldsymbol{w}_{x}+\boldsymbol{w}_{y}^{\mathrm{T}} \boldsymbol{Y} \boldsymbol{Y}^{\mathrm{T}} \boldsymbol{w}_{y} = 2 \end{aligned} \end{equation} (2.6)

    where \boldsymbol{A} = 2\boldsymbol{U}-\boldsymbol{I} and \boldsymbol{I} denotes the identity matrix. On the basis of CCA, CECCA incorporates discriminant analysis to jointly optimize the correlation and the discriminability of the combined features, which makes the extracted features more suitable for classification. However, these methods cannot achieve a closed-form solution. CDCA [26] combined GMML with discriminative CCA and thereby achieves a closed-form solution in the Riemannian manifold space; its optimization objective is as follows:

    \begin{equation} \begin{aligned} \min _{\bf{A} \succ \bf{0}} \gamma \operatorname{tr}(\boldsymbol{A} \boldsymbol{C})+(1-\gamma)\left(\operatorname{tr}\left(\boldsymbol{A} \boldsymbol{S}_{Z}\right)+\operatorname{tr}\left(\boldsymbol{A}^{-1} \boldsymbol{D}_{Z}\right)\right) = \\ \operatorname{tr}\left(\boldsymbol{A}\left(\gamma \boldsymbol{C}+(1-\gamma) \boldsymbol{S}_{Z}\right)\right)+\operatorname{tr}\left(\boldsymbol{A}^{-1}(1-\gamma) \boldsymbol{D}_{Z}\right) \end{aligned} \end{equation} (2.7)

    From Eq (2.7) and CDCA [26], we can see that, with the help of the discriminative part and the closed-form solution, multi-view learning can easily attain globally optimal solutions and achieve good results.

    CCA and its variants suffer from three main problems: (1) the similarity and dissimilarity across views are not modeled; (2) although the data labels can be exploited by imposing supervised constraints, the resulting objective functions are nonconvex; (3) the cross-view correlations are modeled in Euclidean space through RKHS kernel transformations [30,31], whose discriminating ability is obviously limited.

    We present a novel cross-view learning model, called DCLMP, in which not only the within-class and between-class scatters are characterized, but also the similarity and dissimilarity of the training data across views are modelled. Although many preferable characteristics are incorporated in DCLMP, its objective function still suffers from non-convexity. To facilitate pursuing globally optimal solutions, we further remodel DCLMP in the Riemannian manifold space to make the objective function convex. The resulting method is named C-DCLMP.

    Assume we are given N training instances sampled from K classes with two views of feature representations, i.e., {\textbf X} = [{\textbf X}_1, {\textbf X}_2, \cdot \cdot \cdot, {\textbf X}_K] \in \mathbb{R}^{p\times N} with {\textbf X}_k = [{\textbf x}_1^k, {\textbf x}_2^k, \cdot \cdot \cdot, {\textbf x}_{N_k}^k] being the N_k x-view instances from the k -th class and {\textbf Y} = [{\textbf Y}_1, {\textbf Y}_2, \cdot \cdot \cdot, {\textbf Y}_K] \in \mathbb{R}^{q\times N} with {\textbf Y}_k = [{\textbf y}_1^k, {\textbf y}_2^k, \cdot \cdot \cdot, {\textbf y}_{N_k}^k] being the N_k y-view instances from the k -th class, where {\textbf x}_i^k and {\textbf y}_i^k stand for the two view representations of the same instance. In order to concatenate them for subsequent classification, we denote U \in \mathbb{R}^{p\times r} and V \in \mathbb{R}^{q\times r} as the projection matrices of the two views that transform their representations into an r -dimensional common space.

    To perform cross-view learning while exploring supervision knowledge in terms of similar and dissimilar relationships among instances in each view and across the views, as well as sample distribution manifolds, we construct DCLMP. To this end, we build the model by taking the following aspects into account: 1) distances between similar instances from the same class should be reduced while those between dissimilar instances from different classes should be enlarged, at both the intra-view and inter-view levels; 2) manifold structures embedded in similar and dissimilar instances should be preserved. These modelling considerations are illustrated in Figure 1.

    Figure 1.  Modelling strategy of DCLMP. Here, circular and triangular shapes represent samples from two different classes, while different filling colors represent different view representations. The samples are distributed dispersedly in the original feature representation space (a); in the DCLMP projection space (b), similar samples are pushed closer while dissimilar samples from different classes are pulled apart, and their manifold relations are preserved.

    Along this line, we construct the objective function of DCLMP as follows:

    \begin{equation} { \begin{split} & \min\limits_{\{{\textbf U}, {\textbf V}\}} \; \frac{1}{N}\sum\limits_{i = 1}^{N} \frac{1}{N} \sum\limits_{j = 1}^{N}\|{\textbf U}^T{\textbf x}_i-{\textbf V}^T{\textbf y}_j\|_2^2\cdot {\textbf L}_{ij}\\ & \quad + \frac{\lambda_1}{K}\sum\limits_{k = 1}^{K}\frac{1}{N_k}\sum\limits_{i = 1}^{N_k}\frac{1}{k_{n}}\sum\limits_{j = 1}^{k_{n}}\Bigg\{\|{\textbf U}^T{\textbf x}_i^k-{\textbf U}^T{\textbf x}_j^k\|_F^2\cdot {\textbf S}_{ij}^{w_x} \\ & \qquad + \|{\textbf V}^T{\textbf y}_i^k-{\textbf V}^T{\textbf y}_j^k\|_F^2\cdot {\textbf S}_{ij}^{w_y}\Bigg\} \\ & \quad - \frac{\lambda_2}{K}\sum\limits_{k = 1}^K\sum\limits_{h\neq k}\frac{1}{N_k}\sum\limits_{i = 1}^{N_k}\frac{1}{k_{n}}\sum\limits_{j = 1}^{k_{n}}\Bigg\{\|{\textbf U}^T{\textbf x}_i^k-{\textbf U}^T{\textbf x}_j^h\|_F^2\cdot {\textbf S}_{ij}^{b_x}\\ & \qquad + \|{\textbf V}^T{\textbf y}_i^k - {\textbf V}^T{\textbf y}_j^h\|_F^2\cdot {\textbf S}_{ij}^{b_y}\Bigg\} \end{split} } \end{equation} (3.1)

    where {\textbf U} and {\textbf V} denote the projection matrices to the r -dimensional common space of the two views and k_{n} denotes the number of nearest neighbors of an instance. {\textbf L} is the discriminative weighting matrix. {\textbf S}^{w_x} and {\textbf S}^{w_y} stand for the within-class manifold weighting matrices of the two views of feature representations, and {\textbf S}^{b_x} and {\textbf S}^{b_y} stand for the corresponding between-class manifold weighting matrices. Their elements are defined as follows:

    \begin{equation} { \begin{split} & {\textbf L}_{ij} = \begin{cases} \frac{1}{N_k}& {\textbf x}_i \ \text{and}\ {\textbf y}_j \ \text{are from the same class}\\ 0& {\textbf x}_i\ \text{and}\ {\textbf y}_j\ \text{are from different classes} \end{cases} \end{split} } \end{equation} (3.2)
    \begin{equation} { \begin{split} & {\textbf S}_{ij}^{w_x} = \begin{cases} exp\left(-\frac{\|{\textbf x}_i^k-{\textbf x}_j^k\|^2}{\sigma_x^2}\right)& { {\textbf x}_j^k \in {\rm{ KNN}} _{k_n} ( {\textbf x}_i^k ) }\\ 0 & { {\textbf x}_j^k \notin {\rm{KNN}} _{k_n} ( {\textbf x}_i^k ) } \end{cases} \end{split} } \end{equation} (3.3)
    \begin{equation} { \begin{split} & {\textbf S}_{ij}^{w_y} = \begin{cases} exp\left(-\frac{\|{\textbf y}_i^k-{\textbf y}_j^k\|^2}{\sigma_y^2}\right)& { {\textbf y}_j^k \in {\rm{KNN}} _{k_n} ( {\textbf y}_i^k ) }\\ 0 & { {\textbf y}_j^k \notin {\rm{KNN}} _{k_n} ( {\textbf y}_i^k ) } \end{cases} \end{split} } \end{equation} (3.4)
    \begin{equation} { \begin{split} & {\textbf S}_{ij}^{b_x} = \begin{cases} exp\left(-\frac{\|{\textbf x}_i^k-{\textbf x}_j^h\|^2}{\sigma_x^2}\right)& { {\textbf x}_j^h \in {\rm{KNN}} _{k_n} ( {\textbf x}_i^k ) }\\ 0 & { {\textbf x}_j^h \notin {\rm{KNN}} _{k_n} ( {\textbf x}_i^k ) } \end{cases} \end{split} } \end{equation} (3.5)
    \begin{equation} { \begin{split} & {\textbf S}_{ij}^{b_y} = \begin{cases} exp\left(-\frac{\|{\textbf y}_i^k-{\textbf y}_j^h\|^2}{\sigma_y^2}\right)& { {\textbf y}_j^h \in {\rm{KNN}} _{k_n} ( {\textbf y}_i^k ) }\\ 0 & { {\textbf y}_j^h \notin {\rm{KNN}} _{k_n} ( {\textbf y}_i^k ) } \end{cases} \end{split} } \end{equation} (3.6)

    where KNN _{k_n} denotes the k_n -nearest neighbors of an instance. \sigma_x and \sigma_y stand for width coefficients to normalize the weights.
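    As an illustration, here is a small Python sketch of how the within-class manifold weights of Eq (3.3) could be built for one class; the function name and the use of scikit-learn's NearestNeighbors are our own choices under stated assumptions, not part of the paper.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def within_class_weights(Xk, k_n, sigma):
    """Xk: (p, N_k) samples of one class (columns); returns the (N_k, N_k) matrix S^{w_x}."""
    Xt = Xk.T                                        # scikit-learn expects rows = samples
    nn = NearestNeighbors(n_neighbors=k_n + 1).fit(Xt)
    _, idx = nn.kneighbors(Xt)                       # the first neighbour is the point itself
    S = np.zeros((Xt.shape[0], Xt.shape[0]))
    for i, neigh in enumerate(idx):
        for j in neigh[1:]:                          # skip the self-neighbour
            S[i, j] = np.exp(-np.sum((Xt[i] - Xt[j]) ** 2) / sigma ** 2)
    return S
```

    The between-class weights of Eqs (3.5) and (3.6) can be built analogously by searching neighbors among the instances of the other classes.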

    In Eq (3.1), the first part characterizes the cross-view similarity and dissimilarity discriminations, the second part preserves the manifold relationships within each class, while the third part magnifies the distribution margins between dissimilar pairs of instances. In this way, both the discriminative information and the manifold distributions are modelled in a joint objective function.

    For convenience of solving Eq (3.1), we transform it into the following concise form

    \begin{equation} { \begin{split} & \min\limits_{{\textbf A} \succ 0} \; tr\big({\textbf A}({\textbf C}+\lambda_1 {\textbf S}_z - \lambda_2 {\textbf D}_z)\big) \end{split} } \end{equation} (3.7)

    with

    \begin{equation} { \begin{split} & {\textbf A} = \left[ \begin{aligned} {\textbf U} \\ {\textbf V} \end{aligned} \right] \left[ \begin{aligned} {\textbf U} \\ {\textbf V} \end{aligned} \right]^T \end{split} } \end{equation} (3.8)
    \begin{equation} { \begin{split} & {\textbf C} = \left[ \begin{aligned} {\textbf 1}_{p\times p} \\ {\textbf 0}_{q\times p} \end{aligned} \right] {\textbf X}{\textbf M}^L{\textbf X}^T[{\textbf 1}_{p\times p}, {\textbf 0}_{p\times q}]+ \left[ \begin{aligned} {\textbf 0}_{p\times q} \\ {\textbf 1}_{q\times q} \end{aligned} \right] {\textbf Y}{\textbf M}^L{\textbf Y}^T[{\textbf 0}_{q\times p}, {\textbf 1}_{q\times q}]-\\ & \qquad \left[ \begin{aligned} {\textbf 1}_{p\times p} \\ {\textbf 0}_{q\times p} \end{aligned} \right] {\textbf X}{\textbf L}{\textbf Y}^T[{\textbf 0}_{q\times p}, {\textbf 1}_{q\times q}]- \left[ \begin{aligned} {\textbf 0}_{p\times q} \\ {\textbf 1}_{q\times q} \end{aligned} \right] {\textbf Y}{\textbf L}{\textbf X}^T[{\textbf 1}_{p\times p}, {\textbf 0}_{p\times q}] \end{split} } \end{equation} (3.9)
    \begin{equation} { \begin{split} & {\textbf S}_z = \sum\limits_{k = 1}^{K}\Bigg\{\left[ \begin{aligned} {\textbf 1}_{p\times p} \\ {\textbf 0}_{q\times p} \end{aligned} \right]{\textbf X}^k\left({\textbf M}^{w_x}+{{\textbf M}^{w_x}}^T-{\textbf S}^{w_x}-{{\textbf S}^{w_x}}^T\right){{\textbf X}^k}^T[{\textbf 1}_{p\times p}, {\textbf 0}_{p\times q}]\\ & \qquad + \left[ \begin{aligned} {\textbf 0}_{p\times q} \\ {\textbf 1}_{q\times q} \end{aligned} \right]{\textbf Y}^k\left({\textbf M}^{w_y}+{{\textbf M}^{w_y}}^T-{\textbf S}^{w_y}-{{\textbf S}^{w_y}}^T\right){{\textbf Y}^k}^T[{\textbf 0}_{q\times p}, {\textbf 1}_{q\times q}] \Bigg\} \end{split} } \end{equation} (3.10)
    \begin{equation} { \begin{split} & {\textbf D}_z = \Bigg\{\left[ \begin{aligned} {\textbf 1}_{p\times p} \\ {\textbf 0}_{q\times p} \end{aligned} \right]{\textbf X}\left({\textbf M}^{b_x}+{{\textbf M}^{b_x}}^T-{\textbf S}^{b_x}-{{\textbf S}^{b_x}}^T\right){\textbf X}^T[{\textbf 1}_{p\times p}, {\textbf 0}_{p\times q}]\\ & \qquad + \left[ \begin{aligned} {\textbf 0}_{p\times q} \\ {\textbf 1}_{q\times q} \end{aligned} \right]{\textbf Y}\left({\textbf M}^{b_y}+{{\textbf M}^{b_y}}^T-{\textbf S}^{b_y}-{{\textbf S}^{b_y}}^T\right){\textbf Y}^T[{\textbf 0}_{q\times p}, {\textbf 1}_{q\times q}] \Bigg\} \end{split} } \end{equation} (3.11)

    where {\textbf M}^{w_x}_{ii} = \sum_{j = 1}^{N}{\textbf S}_{ij}^{w_x} , {\textbf M}^{w_y}_{ii} = \sum_{j = 1}^{N}{\textbf S}_{ij}^{w_y} , {\textbf M}^{b_x}_{ii} = \sum_{j = 1}^{N}{\textbf S}_{ij}^{b_x} , {\textbf M}^{b_y}_{ii} = \sum_{j = 1}^{N}{\textbf S}_{ij}^{b_y} , and {\textbf M}^{L}_{ii} = \sum_{j = 1}^{N}{\textbf L}_{ij} .
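    For clarity, a short sketch of how one view-specific term of {\textbf S}_z in Eq (3.10) could be assembled is given below; the degree matrices follow the definitions above, while the helper name and the block-embedding layout are illustrative assumptions.

```python
import numpy as np

def view_scatter_block(S, Xk, p, q, view='x'):
    """S: (N_k, N_k) manifold weights, Xk: (dim, N_k); returns a (p+q, p+q) block of S_z."""
    M = np.diag(S.sum(axis=1))                       # degree matrix, M_ii = sum_j S_ij
    Lap = M + M.T - S - S.T                          # Laplacian-like combination from Eq (3.10)
    core = Xk @ Lap @ Xk.T                           # view-specific scatter
    Z = np.zeros((p + q, p + q))
    if view == 'x':
        Z[:p, :p] = core                             # embed into the x-view block
    else:
        Z[p:, p:] = core                             # embed into the y-view block
    return Z

# S_z sums such blocks over all classes and both views; D_z is built analogously
# from the between-class weights of Eq (3.11).
```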

    We let \mathcal{J} record the objective function value of Eq (3.7), introduce {\textbf Q} with {\textbf Q}^T{\textbf Q} = {\textbf I} to replace {\textbf A} (through {\textbf A} = {\textbf Q}{\textbf Q}^T ) and introduce the Lagrange multiplier matrix {\Lambda} , rewriting Eq (3.7) as

    \begin{equation} { \begin{split} & \mathcal{J}_{\{{\textbf Q}, {\Lambda}\}} = tr\big({\textbf Q}^T\left({\textbf C}+\lambda_1 {\textbf S}_z - \lambda_2 {\textbf D}_z\right){\textbf Q}\big) - tr\big({\Lambda}({\textbf Q}^T{\textbf Q} - {\textbf I})\big). \end{split} } \end{equation} (3.12)

    Calculating the partial derivative of \mathcal{J}_{\{{\textbf Q}, {\Lambda}\}} with respect to {\textbf Q} and setting it to zero yields

    \begin{equation} { \begin{split} & ({\textbf C}+\lambda_1 {\textbf S}_z - \lambda_2 {\textbf D}_z){\textbf Q} = {\textbf Q}{\Lambda}, \end{split} } \end{equation} (3.13)

    The projection matrix {\textbf Q} can be obtained by calculating a required number of smallest eigenvectors of {\textbf C}+\lambda_1 {\textbf S}_z - \lambda_2 {\textbf D}_z . Finally, we can recover {\textbf A} = {\textbf Q}{\textbf Q}^T . Then, {\textbf U} and {\textbf V} can be obtained through Eq (3.8).
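    A compact sketch of this eigenvector-based DCLMP solution (Eqs (3.8) and (3.13)) might look as follows; the explicit symmetrization step is an added numerical safeguard rather than part of the derivation.

```python
import numpy as np

def solve_dclmp(C, Sz, Dz, lam1, lam2, p, r):
    """Solve DCLMP via Eq (3.13): r smallest eigenvectors of C + lam1*Sz - lam2*Dz."""
    G = C + lam1 * Sz - lam2 * Dz
    G = (G + G.T) / 2                                # symmetrize against round-off
    vals, vecs = np.linalg.eigh(G)                   # eigenvalues in ascending order
    Q = vecs[:, :r]                                  # r smallest eigenvectors
    A = Q @ Q.T                                      # Eq (3.8)
    U, V = Q[:p], Q[p:]                              # split the stacked projection matrix
    return A, U, V
```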

    We find that such an objective function may not be convex [32,33]. The separability of nonlinear data patterns in the geodesic space can be significantly improved, which benefits their subsequent recognition. Referring to \min_{{\textbf A}\succ 0} \; tr({\textbf A}^{-1}\bullet) \Leftrightarrow \max_{{\textbf A} \succ 0} \; tr({\textbf A}\bullet) [34], we reformulate DCLMP in (3.7) equivalently as

    \begin{equation} { \begin{split} & \min\limits_{{\textbf A} \succ 0} \; tr\big({\textbf A}{\textbf C}+\lambda_1 {\textbf A}{\textbf S}_z + \lambda_2{\textbf A}^{-1}{\textbf D}_z\big) \Leftrightarrow \min\limits_{{\textbf A} \succ 0} \; tr({\textbf A}{\textbf C})+\lambda_1tr({\textbf A}{\textbf S}_z) + \lambda_2tr({\textbf A}^{-1}{\textbf D}_z), \end{split} } \end{equation} (3.14)

    Minimizing the third term \lambda_2 tr({\textbf A}^{-1}{\textbf D}_z) is equivalent to minimizing -\lambda_2 tr\left({\textbf AD}_z\right) in Eq (3.7). Although the last term is nonlinear, it is defined in the convex cone space [35] and thus is still convex. As a result, Eq (3.14) is entirely convex with respect to {\textbf A} and enjoys a closed-form solution [36,37,38]. To distinguish Eq (3.14) from DCLMP, we call it C-DCLMP.

    For convenience of deriving the closed-form solution, we reformulate Eq (3.14) as

    \begin{equation} { \begin{split} & \min\limits_{{\textbf A} \succ 0} \; \gamma tr({\textbf A}^{-1}{\textbf D}_z) + (1-\gamma)\left(tr({\textbf AS}_z\right) + \alpha tr\left({\textbf AC})\right), \end{split} } \end{equation} (3.15)

    where we set \gamma \in (0, 1) [34]. Let J({\textbf A}): = \gamma tr({\textbf A}^{-1}{\textbf D}_z) + (1-\gamma)\left(tr\left({\textbf AS}_z\right) + \alpha tr\left({\textbf AC}\right)\right) . Setting the gradient of J({\textbf A}) with respect to {\textbf A} to zero yields

    \begin{equation} { \begin{split} & (1-\gamma){\textbf A}({\textbf S}_z+\alpha{\textbf C}){\textbf A} = \gamma {\textbf D}_z, \\ \end{split} } \end{equation} (3.16)

    whose solution is the midpoint of the geodesic joining ((1-\gamma)({\textbf S}_z+\alpha{\textbf C}))^{-1} and \gamma{\textbf D}_z , that is

    \begin{equation} { \begin{split} & {\textbf A} = \big((1-\gamma)({\textbf S}_z+\alpha{\textbf C})\big)^{-1}\sharp_{1/2}(\gamma {\textbf D}_z), \\ \end{split} } \end{equation} (3.17)

    where (\cdot)\sharp_{1/2}(\cdot) denotes the geodesic midpoint. We extend the solution (3.17) along the geodesic by replacing (\cdot)\sharp_{1/2}(\cdot) with (\cdot)\sharp_{t}(\cdot) , 0\leqslant t\leqslant 1 .
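    The geodesic operation (\cdot)\sharp_{t}(\cdot) used in Eqs (3.17) and (3.21) can be computed as A\sharp_t B = A^{1/2}(A^{-1/2}BA^{-1/2})^{t}A^{1/2} ; a small NumPy/SciPy sketch is given below (the real-part casts guard against tiny imaginary round-off and are our own addition).

```python
import numpy as np
from scipy.linalg import sqrtm, fractional_matrix_power

def geodesic_mean(A, B, t=0.5):
    """Weighted geodesic mean A #_t B of two symmetric positive-definite matrices."""
    A_half = np.real(sqrtm(A))
    A_half_inv = np.linalg.inv(A_half)
    inner = A_half_inv @ B @ A_half_inv
    return A_half @ np.real(fractional_matrix_power(inner, t)) @ A_half
```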

    We add a regularizer with prior knowledge to (3.15). Here, we incorporate symmetrized LogDet divergence and consequently (3.15) becomes

    \begin{equation} { \begin{split} & \min\limits_{{\textbf A}\succ 0} \; \gamma tr({\textbf A}^{-1}{\textbf D}_z) + (1-\gamma)(tr\left({\textbf AS}_z\right) + \alpha tr\left({\textbf AC}\right)) \\ & \qquad + \lambda D_{sld}({\textbf A}, {\textbf A}_0), \end{split} } \end{equation} (3.18)
    \begin{equation} { \begin{split} & D_{sld}({\textbf A}, {\textbf A}_0) \; = tr({\textbf AA}_0^{-1})+tr({\textbf A}^{-1}{\textbf A}_0)-2(p+q), \end{split} } \end{equation} (3.19)

    where (p + q) is the dimension of the data. Fortunately, complying with the definition of the geometric mean [36], Eq (3.18) is still convex. We let G({\textbf A}): = \gamma tr({\textbf A}^{-1}{\textbf D}_z) + (1-\gamma)(tr\left({\textbf AS}_z\right) + \alpha tr\left({\textbf AC}\right)) + \lambda D_{sld}({\textbf A}, {\textbf A}_0) . Then we set the gradient of G({\textbf A}) with respect to {\textbf A} to zero and obtain

    \begin{equation} { \begin{split} & (1-\gamma){\textbf A}({\textbf S}_z+\alpha{\textbf C}){\textbf A}+\lambda {\textbf A}{\textbf A}_0^{-1}{\textbf A} = \gamma {\textbf D}_z+\lambda {\textbf A}_0, \\ \end{split} } \end{equation} (3.20)

    from which we calculate the closed-form solution as

    \begin{equation} { \begin{split} & {\textbf A} = ((1-\gamma)({\textbf S}_z+\alpha{\textbf C})+\lambda {\textbf A}_0^{-1})^{-1}\sharp_{t}(\gamma {\textbf D}_z+\lambda {\textbf A}_0).\\ \end{split} } \end{equation} (3.21)

    More precisely, according to the definition of (\cdot)\sharp_{t}(\cdot) , namely the geodesic mean jointing two matrices, we can directly expand the final solution of our C-DCLMP in Eq (3.18) as

    \begin{equation} { \begin{split} & {\textbf A} = ((1-\gamma)({\textbf S}_z+\alpha{\textbf C})+\lambda {\textbf A}_0^{-1})^{-1}\sharp_{t}(\gamma {\textbf D}_z+\lambda {\textbf A}_0)\\ & \; \; = \left((1-\gamma)({\textbf S}_z+\alpha{\textbf C})+\lambda {\textbf A}_0^{-1}\right)^{1/2}\\ & \quad \Big(\left((1-\gamma)({\textbf S}_z+\alpha{\textbf C})+\lambda {\textbf A}_0^{-1}\right)^{-1/2}(\gamma {\textbf D}_z+\lambda {\textbf A}_0)\\ & \quad \left((1-\gamma)({\textbf S}_z+\alpha{\textbf C})+\lambda {\textbf A}_0^{-1}\right)^{-1/2}\Big)^{t}\\ & \quad \left((1-\gamma)({\textbf S}_z+\alpha{\textbf C})+\lambda {\textbf A}_0^{-1}\right)^{1/2}. \end{split} } \end{equation} (3.22)

    where we set {\textbf A}_0 to be the (p + q) -order identity matrix {\textbf I}_{p+q} . Once {\textbf A} is obtained, {\textbf U} and {\textbf V} can be recovered through Eq (3.8).
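    Putting the pieces together, a sketch of the C-DCLMP closed-form solution in Eq (3.22) could look as follows; it reuses the geodesic_mean helper sketched earlier, and the rank- r eigen-factorization used to recover {\textbf U} and {\textbf V} from {\textbf A} is one reasonable choice rather than the paper's prescribed procedure.

```python
import numpy as np

def solve_cdclmp(C, Sz, Dz, alpha, gamma, lam, t, r, p, A0=None):
    """Closed-form C-DCLMP solution following Eq (3.22); returns U and V."""
    n = C.shape[0]                                   # n = p + q
    A0 = np.eye(n) if A0 is None else A0
    left = np.linalg.inv((1 - gamma) * (Sz + alpha * C) + lam * np.linalg.inv(A0))
    right = gamma * Dz + lam * A0
    A = geodesic_mean(left, right, t)                # geodesic mean, Eq (3.22)
    vals, vecs = np.linalg.eigh(A)                   # ascending eigenvalues
    Q = vecs[:, -r:] * np.sqrt(np.maximum(vals[-r:], 0.0))  # rank-r factor, A ~ Q Q^T
    return Q[:p], Q[p:]                              # U, V
```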

    For a test instance pair ({\textbf x}, {\textbf y}) , its concatenated representation can be generated by {\textbf U}^T{\textbf x}+{\textbf V}^T{\textbf y} = \left[\begin{aligned} {\textbf U} \\ {\textbf V} \end{aligned} \right]^T\left[\begin{aligned} {\textbf x}\\ {\textbf y} \end{aligned} \right] , and the classification decision can be made on this fused representation using a classifier (e.g., KNN).
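    A brief sketch of this fusion and KNN decision step is shown next; scikit-learn's KNeighborsClassifier is used here as a stand-in for whatever classifier implementation the authors actually employed.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fuse(U, V, X, Y):
    """X: (p, N), Y: (q, N); returns the (N, r) fused features U^T x + V^T y."""
    return (U.T @ X + V.T @ Y).T

def knn_accuracy(U, V, Xtr, Ytr, ytr, Xte, Yte, yte, k=5):
    clf = KNeighborsClassifier(n_neighbors=k).fit(fuse(U, V, Xtr, Ytr), ytr)
    return clf.score(fuse(U, V, Xte, Yte), yte)      # classification accuracy
```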

    To comprehensively evaluate the proposed methods, we first performed comparative experiments on several benchmark and real face datasets. We also performed sensitivity analysis on the model parameters.

    For evaluation and comparisons, CCA [1], DCCA [12], MPECCA [14], CECCA [17], DCA [20] and CDCA [26] were implemented. All hyper-parameters were cross-validated in the range [0, 0.1, ..., 1] for t and \gamma , and [1e-7, 1e-6, ..., 1e3] for \alpha and \lambda . For the concatenated cross-view representations, a 5 -nearest-neighbors classifier was employed for classification. Recognition accuracy (%, higher is better) and mean absolute error (MAE, lower is better) were adopted as performance measures.
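    For reference, the hyper-parameter grids described above can be written out as in the following sketch (the variable names are ours; the cross-validation loop itself is omitted).

```python
import numpy as np

# Hyper-parameter grids described in the text.
t_grid      = np.round(np.arange(0.0, 1.01, 0.1), 1)   # t and gamma in [0, 0.1, ..., 1]
gamma_grid  = t_grid
alpha_grid  = [10.0 ** e for e in range(-7, 4)]         # alpha and lambda in [1e-7, ..., 1e3]
lambda_grid = list(alpha_grid)
```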

    We first performed experiments on several widely used non-face multi-view datasets, i.e., MFD [39], USPS [40], AWA [41] and ADNI [42]. We report the results in Table 1.

    Table 1.  Recognition accuracy (%) comparison on the non-face datasets.
    Dataset View Representations CCA DCA MPECCA DCCA CECCA CDCA DCLMP (ours) C-DCLMP (ours)
    MFD fac fou 80.22 \pm 0.9 80.00 \pm 0.2 90.64 \pm 1.3 95.15 \pm 0.9 96.46 \pm 2.4 98.11 \pm 0.3 94.49 \pm 1.7 98.03 \pm 0.3
    fac kar 92.12 \pm 0.5 90.10 \pm 0.8 95.39 \pm 0.6 95.33 \pm 0.7 96.52 \pm 1.2 97.06 \pm 0.4 96.86 \pm 0.5 97.93 \pm 0.6
    fac mor 78.22 \pm 0.8 63.22 \pm 4.3 72.32 \pm 2.4 95.22 \pm 0.9 94.23 \pm 1.0 98.13 \pm 0.3 90.97 \pm 2.5 97.63 \pm 0.3
    fac pix 83.02 \pm 1.2 90.20 \pm 0.5 94.65 \pm 0.5 65.60 \pm 1.1 93.67 \pm 2.9 97.52 \pm 0.4 97.45 \pm 0.5 97.21 \pm 0.4
    fac zer 84.00 \pm 0.6 71.50 \pm 2.2 93.79 \pm 0.7 96.00 \pm 0.6 97.04 \pm 0.6 97.03 \pm 0.4 95.98 \pm 0.3 97.75 \pm 0.4
    fou kar 90.11 \pm 1.0 75.42 \pm 5.6 93.98 \pm 0.4 89.12 \pm 4.3 96.90 \pm 0.5 97.19 \pm 0.6 97.45 \pm 0.4 97.45 \pm 0.3
    fou mor 70.22 \pm 0.4 55.82 \pm 4.6 60.62 \pm 1.6 82.30 \pm 0.9 78.25 \pm 0.6 83.81 \pm 0.7 82.09 \pm 1.0 84.80 \pm 0.6
    fou pix 68.44 \pm 0.4 76.10 \pm 4.7 78.24 \pm 1.1 90.41 \pm 3.2 76.28 \pm 1.3 96.11 \pm 0.5 97.62 \pm 0.4 97.74 \pm 0.3
    fou zer 74.10 \pm 0.9 62.80 \pm 4.1 79.38 \pm 1.2 79.53 \pm 4.5 83.16 \pm 1.4 85.98 \pm 0.9 85.33 \pm 1.1 86.56 \pm 1.0
    kar mor 64.09 \pm 0.6 82.00 \pm 1.6 72.92 \pm 2.7 91.95 \pm 2.8 91.89 \pm 0.6 97.28 \pm 0.5 96.83 \pm 0.5 97.14 \pm 0.4
    kar pix 88.37 \pm 0.9 88.85 \pm 0.8 95.07 \pm 0.6 92.59 \pm 2.0 95.98 \pm 0.3 94.68 \pm 0.5 97.54 \pm 0.4 97.31 \pm 0.5
    kar zer 90.77 \pm 1.0 75.97 \pm 2.8 94.17 \pm 0.6 88.47 \pm 2.9 93.57 \pm 0.9 96.69 \pm 0.4 96.98 \pm 0.4 97.42 \pm 0.4
    mor pix 68.66 \pm 1.5 82.01 \pm 2.1 67.21 \pm 2.3 93.04 \pm 0.7 90.08 \pm 1.0 96.89 \pm 0.4 97.20 \pm 0.5 97.19 \pm 0.4
    mor zer 73.22 \pm 0.6 50.35 \pm 1.8 60.95 \pm 1.4 84.55 \pm 0.9 80.59 \pm 0.9 84.19 \pm 0.8 81.75 \pm 1.1 84.29 \pm 0.7
    pix zer 82.46 \pm 0.6 71.16 \pm 2.8 82.81 \pm 1.2 91.67 \pm 2.1 91.81 \pm 1.2 96.30 \pm 0.5 97.35 \pm 0.5 97.30 \pm 0.5
    AWA cq lss 73.11 \pm 2.1 62.08 \pm 0.3 76.19 \pm 1.0 70.51 \pm 1.3 77.53 \pm 1.7 87.80 \pm 2.8 89.03 \pm 1.4 89.80 \pm 1.2
    cq phog 65.21 \pm 1.4 73.10 \pm 1.2 72.42 \pm 1.6 70.15 \pm 0.9 74.51 \pm 2.1 85.58 \pm 2.7 86.71 \pm 2.3 86.81 \pm 1.2
    cq rgsift 60.22 \pm 1.3 61.40 \pm 1.7 78.04 \pm 1.3 82.87 \pm 2.4 82.83 \pm 1.4 90.99 \pm 3.0 93.44 \pm 0.6 94.34 \pm 0.8
    cq sift 74.33 \pm 1.3 61.28 \pm 1.9 77.85 \pm 1.4 83.19 \pm 2.1 80.05 \pm 1.7 81.59 \pm 5.2 87.17 \pm 0.8 90.68 \pm 0.8
    cq surf 75.86 \pm 1.7 69.30 \pm 2.1 79.07 \pm 0.8 73.55 \pm 2.3 81.59 \pm 1.5 93.58 \pm 1.1 94.36 \pm 1.0 95.35 \pm 0.5
    lss phog 69.96 \pm 1.7 59.72 \pm 0.2 68.12 \pm 1.2 64.86 \pm 2.6 71.36 \pm 1.4 80.48 \pm 2.0 81.76 \pm 1.1 81.62 \pm 1.1
    lss rgsift 78.65 \pm 0.9 63.21 \pm 1.3 73.64 \pm 1.0 78.28 \pm 2.8 77.28 \pm 1.4 87.38 \pm 4.3 90.13 \pm 0.7 89.95 \pm 1.0
    lss sift 73.49 \pm 1.0 65.72 \pm 2.1 73.12 \pm 1.4 66.21 \pm 1.6 76.69 \pm 1.7 81.56 \pm 2.4 84.05 \pm 0.9 84.07 \pm 1.9
    lss surf 76.30 \pm 1.4 65.33 \pm 1.8 74.84 \pm 1.6 79.06 \pm 2.8 78.52 \pm 1.3 89.81 \pm 2.5 89.75 \pm 0.8 91.12 \pm 0.7
    phog rgsift 68.18 \pm 1.1 48.38 \pm 1.0 69.49 \pm 2.3 77.37 \pm 1.5 74.41 \pm 1.5 82.76 \pm 1.1 83.57 \pm 1.6 83.68 \pm 1.2
    phog sift 68.26 \pm 1.1 70.24 \pm 1.1 68.97 \pm 1.3 63.16 \pm 1.3 72.14 \pm 1.5 80.50 \pm 1.2 83.57 \pm 1.1 83.75 \pm 1.5
    phog surf 64.57 \pm 1.4 56.94 \pm 0.5 71.55 \pm 1.4 75.68 \pm 1.9 74.43 \pm 2.1 84.97 \pm 2.6 88.02 \pm 1.8 87.34 \pm 0.8
    rgsift sift 71.35 \pm 1.3 58.56 \pm 2.3 72.85 \pm 1.1 75.28 \pm 2.5 76.69 \pm 1.7 90.76 \pm 2.2 93.44 \pm 0.4 93.79 \pm 1.0
    rgsift surf 75.55 \pm 1.3 67.22 \pm 1.6 76.94 \pm 2.2 84.10 \pm 2.4 80.46 \pm 1.7 93.25 \pm 1.2 92.82 \pm 0.8 93.66 \pm 0.8
    sift surf 75.33 \pm 1.3 63.36 \pm 1.6 74.27 \pm 1.2 82.14 \pm 2.7 75.51 \pm 1.1 90.07 \pm 3.4 90.67 \pm 1.0 91.69 \pm 1.1
    ADNI AV FDG 65.47 \pm 1.8 73.28 \pm 2.1 75.28 \pm 2.6 76.25 \pm 2.1 76.26 \pm 2.5 79.59 \pm 1.9 68.64 \pm 3.3 80.86 \pm 2.1
    AV VBM 71.02 \pm 2.4 71.02 \pm 2.8 73.24 \pm 3.1 63.47 \pm 2.1 60.67 \pm 2.7 81.59 \pm 2.5 78.38 \pm 2.5 80.70 \pm 2.8
    FDG VBM 61.37 \pm 1.2 65.28 \pm 1.6 70.37 \pm 2.6 64.05 \pm 1.6 70.95 \pm 1.8 80.12 \pm 2.0 74.97 \pm 2.9 80.21 \pm 1.7
    USPS left right 62.14 \pm 0.6 80.11 \pm 1.2 66.67 \pm 0.9 63.96 \pm 2.0 82.89 \pm 1.9 89.76 \pm 0.3 96.19 \pm 0.7 96.03 \pm 0.6


    The proposed DCLMP method yielded the second-highest recognition accuracies in most cases, slightly lower than those of the proposed C-DCLMP. The improvement achieved by the C-DCLMP method is significant, especially on the AWA and USPS datasets.

    We also conducted age estimation experiments on AgeDB [43], CACD [44] and IMDB-WIKI [45]. These three databases are illustrated in Figure 2.

    Figure 2.  Face examples from the (a) AgeDB, (b) CACD and (c) IMDB-WIKI datasets.

    We extracted BIF [46] and HoG [47] feature vectors and reduced their dimensions to 200 by PCA as the two view representations. We randomly chose 50, 100 and 150 samples for training. We also used VGG19 [48] and ResNet50 [45] to extract deep feature vectors from the AgeDB, CACD and IMDB-WIKI databases. We report the results in Tables 2–6.

    Table 2.  Age estimation results (MAE \pm STD) on AgeDB.
    training samples CCA DCA MPECCA DCCA CECCA CDCA DCLMP (ours) C-DCLMP (ours)
    50 17.70 \pm 0.5 17.78 \pm 0.5 16.10 \pm 0.4 15.93 \pm 0.4 15.62 \pm 0.5 15.48 \pm 0.2 15.59 \pm 0.1 15.16 \pm 0.4
    100 16.81 \pm 0.5 17.23 \pm 0.6 14.74 \pm 0.5 14.79 \pm 0.5 14.67 \pm 0.4 14.57 \pm 0.2 14.60 \pm 0.2 14.13 \pm 0.2
    150 15.43 \pm 0.5 16.25 \pm 0.6 13.83 \pm 0.5 13.49 \pm 0.4 13.43 \pm 0.4 13.21 \pm 0.2 13.48 \pm 0.2 13.19 \pm 0.3

    Table 3.  Age estimation results (MAE \pm STD) on AgeDB (deep features).
    training samples CCA DCA MPECCA DCCA CECCA CDCA DCLMP (ours) C-DCLMP (ours)
    50 16.17 \pm 0.5 16.27 \pm 0.46 15.42 \pm 0.5 15.09 \pm 0.5 14.78 \pm 0.5 14.67 \pm 0.3 14.75 \pm 0.2 14.52 \pm 0.2
    100 15.86 \pm 0.5 15.79 \pm 0.6 14.89 \pm 0.8 14.23 \pm 0.4 14.09 \pm 0.4 13.78 \pm 0.3 14.07 \pm 0.3 13.68 \pm 0.2
    150 15.09 \pm 0.5 14.81 \pm 0.3 13.97 \pm 0.5 13.41 \pm 0.6 13.34 \pm 0.5 13.16 \pm 0.3 13.46 \pm 0.3 13.15 \pm 0.3

    Table 4.  Age estimation results (MAE \pm STD) on CACD.
    training samples CCA DCA MPECCA DCCA CECCA CDCA DCLMP (ours) C-DCLMP (ours)
    50 16.28 \pm 0.5 16.78 \pm 0.4 15.79 \pm 0.4 14.98 \pm 0.4 14.38 \pm 0.4 14.28 \pm 0.4 14.10 \pm 0.3 13.95 \pm 0.3
    100 15.45 \pm 0.4 16.52 \pm 0.5 15.04 \pm 0.5 14.44 \pm 0.4 13.99 \pm 0.4 13.98 \pm 0.3 13.85 \pm 0.2 13.74 \pm 0.2
    150 15.20 \pm 0.5 15.41 \pm 0.5 14.79 \pm 0.4 14.02 \pm 0.5 13.73 \pm 0.5 13.79 \pm 0.2 13.67 \pm 0.1 13.63 \pm 0.3

    Table 5.  Age estimation results (MAE \pm STD) on CACD (deep features).
    training samples CCA DCA MPECCA DCCA CECCA CDCA DCLMP (ours) C-DCLMP (ours)
    50 16.07 \pm 0.6 16.27 \pm 0.4 15.35 \pm 0.4 14.21 \pm 0.6 13.39 \pm 0.4 13.49 \pm 0.3 13.52 \pm 0.3 13.27 \pm 0.2
    100 15.69 \pm 0.5 15.75 \pm 0.3 14.65 \pm 0.5 14.17 \pm 0.5 13.28 \pm 0.3 13.26 \pm 0.3 13.24 \pm 0.2 12.97 \pm 0.4
    150 15.22 \pm 0.4 15.32 \pm 0.4 14.45 \pm 0.3 14.01 \pm 0.6 13.01 \pm 0.3 12.94 \pm 0.3 12.91 \pm 0.3 12.76 \pm 0.4

    Table 6.  Age estimation results (MAE \pm STD) on IMDB-WIKI.
    training samples CCA DCA MPECCA DCCA CECCA CDCA DCLMP (ours) C-DCLMP (ours)
    50 14.29 \pm 0.5 14.39 \pm 0.5 13.49 \pm 0.3 13.04 \pm 0.5 12.26 \pm 0.4 12.37 \pm 0.3 11.84 \pm 0.3 11.65 \pm 0.3
    100 13.97 \pm 0.5 13.87 \pm 0.4 12.79 \pm 0.5 12.35 \pm 0.3 11.96 \pm 0.3 11.86 \pm 0.3 11.53 \pm 0.2 11.13 \pm 0.3
    150 13.43 \pm 0.5 13.56 \pm 0.5 12.48 \pm 0.3 12.26 \pm 0.4 11.66 \pm 0.3 11.65 \pm 0.3 11.45 \pm 0.2 10.98 \pm 0.3


    The estimation errors (MAEs) of all the methods decreased monotonically as the number of training samples increased. The MAEs of DCLMP are the second lowest, demonstrating the benefit of modelling cross-view discriminative knowledge and data manifold structures. We can also observe that C-DCLMP yields the lowest estimation errors, demonstrating its effectiveness and superiority.

    For the proposed methods, we performed parameter analysis on t , \gamma and \lambda involved in Eq (3.21). Specifically, we conducted age estimation experiments on both AgeDB and CACD. The results are plotted in Figures 3–5.

    Figure 3.  Age estimation MAE on AgeDB (left) and CACD (right) with varying t .
    Figure 4.  Age estimation MAE on AgeDB (left) and CACD (right) with varying \gamma .
    Figure 5.  Age estimation MAE on AgeDB (left) and CACD (right) with varying \lambda .

    Geometric weighting parameter t of C-DCLMP: We make some interesting observations from Figure 3. With t increasing from 0 to 1, the estimation error descended first and then rose again. This shows that the within-class manifold structures and the inter-class data distributions are both helpful in regularizing the model solution space.

    Metric balance parameter \gamma of C-DCLMP: We can observe from Figure 4 that, the age estimation error (MAE) achieved the lowest values when 0.1 < \gamma < 0.9 . This observation illustrates that preserving the data cross-view discriminative knowledge and the manifold distributions is useful and helps improve the estimation precision.

    Metric prior parameter \lambda of C-DCLMP: Figure 5 shows that, as \lambda increased, the age estimation error descended to its lowest value around \lambda = 1e-1 and then increased steeply. This demonstrates that incorporating moderate metric prior knowledge can positively regularize the model solution, but excessive prior knowledge may dominate the data-driven terms and mislead the training of the model.

    For the proposed methods and the comparison methods mentioned above, we performed time complexity analysis. Specifically, we conducted age estimation experiments on both AgeDB and CACD by choosing 100 samples from each class for training while taking the rest for testing, respectively. We reported the averaged results in Table 7.

    Table 7.  Running time results (mean \pm STD) on AgeDB and CACD.
    Dataset CCA DCA MPECCA DCCA CECCA CDCA DCLMP (ours) C-DCLMP (ours)
    AgeDB 0.10 \pm 0.12 0.06 \pm 0.05 0.41 \pm 0.03 0.18 \pm 0.03 0.52 \pm 0.02 0.11 \pm 0.10 57.74 \pm 0.71 54.94 \pm 0.32
    CACD 0.09 \pm 0.13 0.06 \pm 0.10 0.38 \pm 0.09 0.15 \pm 0.04 0.47 \pm 0.03 0.07 \pm 0.01 30.86 \pm 0.61 31.04 \pm 0.68


    For the proposed methods, we performed ablation experiments. Specifically, we conducted age estimation experiments on both AgeDB and CACD. We repeated the experiment 10 times with random data partitions and report the averaged results in Table 8. In Table 8, each part refers to the corresponding term of Eq (3.7).

    Table 8.  Ablation experiment results (MAE \pm STD) on AgeDB and CACD.
    Dataset First part Second part Third part C-DCLMP (ours)
    AgeDB \checkmark \checkmark 14.49 \pm 0.16
    \checkmark \checkmark 14.51 \pm 0.10
    \checkmark \checkmark 14.47 \pm 0.22
    \checkmark \checkmark \checkmark 14.18 \pm 0.32
    CACD \checkmark \checkmark 14.07 \pm 0.25
    \checkmark \checkmark 14.08 \pm 0.32
    \checkmark \checkmark 14.04 \pm 0.21
    \checkmark \checkmark \checkmark 13.73 \pm 0.24


    In this paper, we proposed DCLMP, in which both the cross-view discriminative information and the spatial structural information of the training data are taken into consideration to enhance subsequent decision making. To pursue closed-form solutions, we remodeled the objective of DCLMP into a nonlinear geodesic space and consequently achieved its convex formulation (C-DCLMP). Finally, we evaluated the proposed methods and demonstrated their superiority on various benchmark and real face datasets. In the future, we will consider exploring the latent information of unlabeled data at the feature and label levels, and will study how to combine related advanced multi-view learning methods to reduce the computational cost of the model and further improve its generalization ability in various scenarios.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was supported by the National Natural Science Foundation of China under Grant 62176128, the Open Projects Program of State Key Laboratory for Novel Software Technology of Nanjing University under Grant KFKT2022B06, the Fundamental Research Funds for the Central Universities No. NJ2022028, the Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund, as well as the Qing Lan Project of the Jiangsu Province.



    [1] T. Myers, Surface tension driven thin film flows, Mech. Thin Film Coat., 1996,259–268. https://doi.org/10.1142/978981450391400_23 doi: 10.1142/978981450391400_23
    [2] T. Myers, Thin films with high surface tension, SIAM Rev., 40 (1998), 441–462. https://doi.org/10.1137/S003614459529284X doi: 10.1137/S003614459529284X
    [3] R. Griffiths, The dynamics of lava flows, Annu. Rev. Fluid Mech., 32 (2000), 477–518. https://doi.org/10.1146/annurev.fluid.32.1.477 doi: 10.1146/annurev.fluid.32.1.477
    [4] J. Grotberg, Respiratory fluid mechanics and transport processes, Annu. Rev. Biomed. Eng., 3 (2001), 421–457. https://doi.org/10.1146/annurev.bioeng.3.1.421 doi: 10.1146/annurev.bioeng.3.1.421
    [5] R. Braun, Dynamics of the tear film, Annu. Rev. Fluid Mech., 44 (2012), 267–297. https://doi.org/10.1146/annurev-fluid-120710-101042 doi: 10.1146/annurev-fluid-120710-101042
    [6] R. Craster, O. Matar, Dynamics and stability of thin liquid films, Rev. Mod. Phys., 81 (2009), 1131–1198. https://link.aps.org/doi/10.1103/RevModPhys.81.1131 doi: 10.1103/RevModPhys.81.1131
    [7] A. Bertozzi, The mathematics of moving contact lines in thin liquid films, Notices Amer. Math. Soc., 45 (1998), 689–697.
    [8] A. Bertozzi, M. Brenner, Linear stability and transient growth in driven contact lines, Phys. Fluids, 9 (1997), 530–539. https://doi.org/10.1063/1.869217 doi: 10.1063/1.869217
    [9] J. Goddard, S. Naire, The spreading and stability of a surfactant-laden drop on an inclined prewetted substrate, J. Fluid Mech., 772 (2015), 535–568. https://doi.org/10.1017/jfm.2015.212 doi: 10.1017/jfm.2015.212
    [10] L. Kondic, Instabilities in gravity driven flow of thin fluid films, SIAM Rev., 45 (2003), 95–115. https://doi.org/10.1137/S003614450240135 doi: 10.1137/S003614450240135
    [11] S. Troian, E. Herbolzheimer, S. Safran, Model for the fingering instability of the spreading surfactant drops, Phys. Rev. Lett., 65 (1990), 333–336. https://link.aps.org/doi/10.1103/PhysRevLett.65.333 doi: 10.1103/PhysRevLett.65.333
    [12] F. B. Carro, Viscous flows, fourth order nonlinear degenerate parabolic equations and singular elliptic problems, Free Bound. Probl. Theory Appl., 323 (1995), 40–56.
    [13] L. Kondic, J. Diez, Pattern formation in the flow of thin films down an incline: Constant flux configuration, Phys. Fluids, 13 (2001), 3168–3184. https://doi.org/10.1063/1.1409965 doi: 10.1063/1.1409965
    [14] J. Diez, L. Kondic, Computing three-dimensional thin film flows including contact lines, J. Comput. Phys., 183 (2002), 274–306. https://doi.org/10.1006/jcph.2002.7197 doi: 10.1006/jcph.2002.7197
    [15] M. Warner, R. Craster, O. Matar, Fingering phenomena created by a soluble surfactant deposition on a thin liquid film, Phys. Fluids, 16 (2004), 2933–2951. https://doi.org/10.1063/1.1763408 doi: 10.1063/1.1763408
    [16] B. Edmonstone, O. Matar, R. Craster, Flow of surfactant-laden thin films down an inclined plane, J. Eng. Math., 50 (2004), 141–156. https://doi.org/10.1007/s10665-004-3689-6 doi: 10.1007/s10665-004-3689-6
    [17] B. Edmonstone, R. Craster, O. Matar, Surfactant-induced fingering phenomena beyond the critical micelle concentration, J. Fluid Mech., 564 (2006), 105–138. https://doi.org/10.1017/S0022112006001352 doi: 10.1017/S0022112006001352
    [18] R. Levy, M. Shearer, The motion of a thin liquid film driven by surfactant and gravity, SIAM J. Appl. Math., 66 (2006), 1588–1609. https://doi.org/10.1137/050637030 doi: 10.1137/050637030
    [19] R. Levy, M. Shearer, T. Witelski, Gravity-driven thin liquid films with insoluble surfactant: smooth traveling waves, Eur. J. Appl. Math., 18 (2007), 679–708. https://doi.org/10.1017/S0956792507007218 doi: 10.1017/S0956792507007218
    [20] A. Mavromoustaki, O. Matar, R. Craster, Dynamics of a climbing surfactant-laden film Ⅱ: Stability, J. Colloid Interf. Sci., 371 (2012), 121–135. https://doi.org/10.1016/j.jcis.2011.11.033 doi: 10.1016/j.jcis.2011.11.033
    [21] J. Barrett, J. Blowey, H. Garcke, Finite element approximation of a fourth order degenerate parabolic equation, Numer. Math., 80 (1998), 525–556. https://doi.org/10.1007/s002110050377 doi: 10.1007/s002110050377
    [22] G. Grun, M. Rumpf, Nonnegativity preserving convergent schemes for the thin film equation, Numer. Math., 87 (2000), 113–152. https://doi.org/10.1007/s002110000197 doi: 10.1007/s002110000197
    [23] A. Heryudono, R. Braun, T. Driscoll, K. Maki, L. Cook, P. King-Smith, Single-equation models for the tear film in a blink cycle: realistic lid motion, Math. Med. Biol., 4 (2007), 347–377. https://doi.org/10.1093/imammb/dqm004 doi: 10.1093/imammb/dqm004
    [24] M. Warner, R. Craster, O. Matar, Fingering phenomena associated with insoluble surfactant spreading on thin liquid films, J. Fluid Mech., 510 (2004), 169–200. https://doi.org/10.1017/S0022112004009437 doi: 10.1017/S0022112004009437
    [25] P. Keast, P. Muir, Algorithm 688: EPDCOL: A more efficient PDECOL code, ACM T. Math. Software, 17 (1991), 153–166. https://doi.org/10.1145/108556.108558 doi: 10.1145/108556.108558
    [26] J. Verwer, J. Blom, J. Sanz-Serna, An adaptive moving grid method for one-dimensional systems of partial differential equations, J. Comput. Phys., 82 (1989), 454–486. https://doi.org/10.1016/0021-9991(89)90058-2 doi: 10.1016/0021-9991(89)90058-2
    [27] R. Furzeland, J. Verwer, P. Zegeling, A numerical study of three moving grid methods for one-dimensional partial differential equations which are based on the method of lines, J. Comput. Phys., 89 (1990), 349–388. https://doi.org/10.1016/0021-9991(90)90148-T doi: 10.1016/0021-9991(90)90148-T
    [28] J. Blom, P. Zegeling, Algorithm 731: A moving-grid interface for systems of one-dimensional partial differential equations, ACM T. Math. Software, 20 (1994), 194–214. https://doi.org/10.1145/178365.178391 doi: 10.1145/178365.178391
    [29] P. Sun, R. Russell, J. Xu, A new adaptive local mesh refinement algorithm and its application on fourth order thin film flow problem, J. Comput. Phys., 224 (2007), 1021–1048. https://doi.org/10.1016/j.jcp.2006.11.005 doi: 10.1016/j.jcp.2006.11.005
    [30] Y. Li, D. Jeong, J. Kim, Adaptive mesh refinement for simulation of thin film flows, Meccanica, 49 (2013), 239–252. https://doi.org/10.1007/s11012-013-9788-6 doi: 10.1007/s11012-013-9788-6
    [31] Y. Lee, H. Thompson, P. Gaskell, An efficient adaptive multigrid algorithm for predicting thin film flow on surfaces containing localised topographic features, Comput. Fluids, 37 (2007), 838–855. https://doi.org/10.1016/j.compfluid.2006.08.006 doi: 10.1016/j.compfluid.2006.08.006
    [32] W. Huang, R. Russell, Adaptive moving mesh methods, Berlin: Springer, 2011. https://doi.org/10.1007/978-1-4419-7916-2
    [33] C. Budd, W. Huang, R. Russell, Adaptivity with moving grids, Acta Numer., 18 (2009), 111–241. https://doi.org/10.1017/S0962492906400015 doi: 10.1017/S0962492906400015
    [34] E. Walsh, Moving mesh methods for problems in meteorology, Ph.D. thesis, University of Bath, 2010.
    [35] B. Edmonstone, O. Matar, R. Craster, Surfactant-induced fingering phenomena in thin film flow down an inclined plane, Phys. D Nonlinear Phenom., 209 (2005), 62–79. https://doi.org/10.1016/j.physd.2005.06.014 doi: 10.1016/j.physd.2005.06.014
    [36] L. Kondic, Instabilities in gravity driven flow of thin fluid films, SIAM Rev., 45 (2003), 95–115.
    [37] P. Brown, C. Hindmarsh, R. Petzold, Using Krylov methods in the solution of large-scale differential-algebraic systems, SIAM J. Sci. Comput., 15 (1994), 1467–1488. https://doi.org/10.1137/0915088 doi: 10.1137/0915088
    [38] C. Budd, J. Williams, Moving mesh generation using the parabolic Monge-Ampere equation, SIAM J. Sci. Comput., 31 (2009), 3438–3465. https://doi.org/10.1137/080716773 doi: 10.1137/080716773
    [39] C. Budd, M. Cullen, E. Walsh, Monge-Ampere based moving mesh methods for numerical weather prediction, with applications to the Eady problem, J. Comput. Phys., 236 (2013), 247–270. https://doi.org/10.1016/j.jcp.2012.11.014 doi: 10.1016/j.jcp.2012.11.014
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
