
Correlation analysis deals with data that have cross-view feature representations. To handle such tasks, many correlation learning approaches have been proposed, among which canonical correlation analysis (CCA) [1,2,3,4,5] is a representative method that has been widely employed [6,7,8,9,10,11]. Specifically, given training data with two or more feature-view representations, traditional CCA seeks a projection vector for each view that maximizes the cross-view correlations. After the data are mapped along these projection directions, subsequent cross-view decisions can be made [4]. Although CCA yields good results, room for improvement remains since the data labels are not incorporated in learning.
When class label information is also provided or available, CCA can be remodeled into a discriminant form by making use of the labels. To this end, Sun et al. [12] proposed a discriminative variant of CCA (i.e., DCCA) that enlarges the distances between dissimilar samples while reducing those between similar samples. Subsequently, Peng et al. [13] built a locally discriminative version of CCA (i.e., LDCCA) based on the assumption that the data distributions follow a low-dimensional manifold embedding. Besides, Su et al. [14] established a multi-patch embedding CCA (MPECCA) that develops multiple metrics, rather than a single one, to model within-class scatters. Afterwards, Sun et al. [15] built a generalized framework for CCA (GCCA). Ji et al. [16] remodeled the scatter matrices by deconstructing them into several fractional-order components and achieved performance improvements.
In addition to directly constructing a label-exploited version of CCA, the supervised labels can be utilized by embedding them as regularization terms. Along this direction, Zhou et al. [17] presented CECCA by embedding LDA-guided [18] feature combinations into the objective function of CCA. Furthermore, Zhao et al. [19] constructed HSL-CCA by reducing inter-class scatters within local neighborhoods. Later, Haghighat et al. [20] proposed the DCA model by deconstructing the inter-class scatter matrix under the guidance of class labels. The preceding variants of CCA were designed for two-view data and cannot directly handle multi-view scenarios. To overcome this shortcoming, many multi-view CCA methods have been proposed, such as GCA [21], MULDA [22] and FMDA [23].
Although the aforementioned methods have achieved success to varying extents, the objective functions of nearly all of them are nonconvex [14,24,25]. CDCA [26] yields closed-form solutions and better results than the previous methods, but it still leaves the manifold structure of the training data unexploited.
To overcome these shortcomings, we first design a discriminative correlation learning with manifold preservation, coined DCLMP, in which not only the cross-view discriminative information but also the spatial structural information of the training data is taken into account to enhance subsequent decision making. To pursue closed-form solutions, we remodel the objective of DCLMP from the Euclidean space to a geodesic space. In this way, we obtain a convex formulation of DCLMP (C-DCLMP). Finally, we comprehensively evaluate the proposed methods and demonstrate their superiority on both toy and real data sets. To summarize, our contributions are three-fold:
1. DCLMP is constructed by modelling both the cross-view discriminative information and the spatial structural information of the training data.
2. The objective function of DCLMP is remodelled to obtain its convex formulation (C-DCLMP).
3. The proposed methods are evaluated with extensive experimental comparisons.
This paper is organized as follows. Section 2 reviews related theories of CCA. Section 3 presents models and their solving algorithms. Then, experiments and comparisons are reported to evaluate the methods in Section 4. Section 5 concludes and provides future directions.
In this section, we briefly review work on multi-view learning, which studies how to establish constraints or dependencies between views by modeling and discovering their interrelations. Tang et al. [27] proposed a multi-view feature selection method named CvLP-DCL, which divides the label space into a consensus part and a domain-specific part and explores the latent information between different views in the label space. CvLP-DCL also explores how to combine cross-domain similarity graph learning with matrix-induced regularization to boost model performance. Tang et al. [28] also proposed UoMvSc for multi-view learning, which mines the value of view-specific graphs and embedding matrices by combining spectral clustering with k-means clustering. In addition, Wang et al. [29] proposed an effective framework for multi-view learning named E2OMVC, which constructs latent feature representations based on anchor graphs and a clustering indicator matrix for multi-view data to obtain better clustering results.
We briefly review related theories of CCA [1,2]. Given two-view feature representations of training data, CCA seeks two projection matrices, one for each view, while preserving the cross-view correlations. To be specific, let $X=[x_1,\ldots,x_N]\in\mathbb{R}^{p\times N}$ and $Y=[y_1,\ldots,y_N]\in\mathbb{R}^{q\times N}$ be two view representations of $N$ training samples, with $x_i$ and $y_i$ denoting normalized representations of the $i$th sample. Besides, let $W_x\in\mathbb{R}^{p\times r}$ and $W_y\in\mathbb{R}^{q\times r}$ denote the projection matrices mapping the training data from the individual view spaces into an $r$-dimensional common space. Then, the correlation between $W_x^T x_i$ and $W_y^T y_i$ should be maximized. Consequently, the formal objective of CCA can be formulated as
$$\max_{\{W_x,W_y\}}\ \frac{W_x^T C_{xy} W_y}{\sqrt{W_x^T C_{xx} W_x\, W_y^T C_{yy} W_y}}, \tag{2.1}$$
where $C_{xx}=\frac{1}{N}\sum_{i=1}^{N}(x_i-\bar{x})(x_i-\bar{x})^T$, $C_{yy}=\frac{1}{N}\sum_{i=1}^{N}(y_i-\bar{y})(y_i-\bar{y})^T$, and $C_{xy}=\frac{1}{N}\sum_{i=1}^{N}(x_i-\bar{x})(y_i-\bar{y})^T$, in which $\bar{x}=\frac{1}{N}\sum_{i=1}^{N}x_i$ and $\bar{y}=\frac{1}{N}\sum_{i=1}^{N}y_i$ respectively denote the sample means of the two views. The numerator describes the sample correlation in the projected space, while the denominator limits the scatter of each view. Typically, Eq (2.1) is converted into a generalized eigenvalue problem as
$$\begin{pmatrix} 0 & XY^T \\ YX^T & 0 \end{pmatrix}\begin{pmatrix} W_x \\ W_y \end{pmatrix}=\lambda\begin{pmatrix} XX^T & 0 \\ 0 & YY^T \end{pmatrix}\begin{pmatrix} W_x \\ W_y \end{pmatrix}. \tag{2.2}$$
Then, $\begin{pmatrix} W_x \\ W_y \end{pmatrix}$ can be obtained by computing the $r$ largest eigenvectors of
$$\begin{pmatrix} XX^T & 0 \\ 0 & YY^T \end{pmatrix}^{-1}\begin{pmatrix} 0 & XY^T \\ YX^T & 0 \end{pmatrix}.$$
After $W_x$ and $W_y$ are obtained, $x_i$ and $y_i$ can be combined as $W_x^T x_i + W_y^T y_i=\begin{pmatrix} W_x \\ W_y \end{pmatrix}^T\begin{pmatrix} x_i \\ y_i \end{pmatrix}$. Once the concatenated feature representations are obtained, subsequent classification or regression decisions can be made.
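To make this concrete, the following is a minimal numpy sketch of classical CCA solved through the generalized eigenproblem of Eq (2.2); the function and variable names are ours, and the small ridge term added for numerical stability is an implementation choice rather than part of the formulation above.

```python
import numpy as np
from scipy.linalg import eigh

def cca(X, Y, r, ridge=1e-6):
    """Classical CCA via the generalized eigenproblem of Eq (2.2).

    X: (p, N) and Y: (q, N) are the two view matrices; returns Wx (p, r)
    and Wy (q, r). The small ridge term is only for numerical stability.
    """
    p, q = X.shape[0], Y.shape[0]
    # CCA is defined on mean-removed data, so center each view first.
    X = X - X.mean(axis=1, keepdims=True)
    Y = Y - Y.mean(axis=1, keepdims=True)
    # Left-hand block matrix: cross-view products on the off-diagonal.
    Z = np.zeros((p + q, p + q))
    Z[:p, p:] = X @ Y.T
    Z[p:, :p] = Y @ X.T
    # Right-hand block matrix: within-view scatters on the diagonal.
    B = np.zeros((p + q, p + q))
    B[:p, :p] = X @ X.T + ridge * np.eye(p)
    B[p:, p:] = Y @ Y.T + ridge * np.eye(q)
    # eigh returns eigenvalues in ascending order; take the largest r.
    _, vecs = eigh(Z, B)
    W = vecs[:, -r:]
    return W[:p], W[p:]

# Fusing a sample pair: Wx.T @ x + Wy.T @ y, as in the text.
```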
The most classic discriminative variant of CCA is DCCA [12], which is formulated as follows:
$$\max_{w_x,w_y}\ \left(w_x^T C_w w_y-\eta\, w_x^T C_b w_y\right)\quad \text{s.t.}\ w_x^T XX^T w_x=1,\ w_y^T YY^T w_y=1 \tag{2.3}$$
DCCA is discriminative in that it uses the instance labels to calculate the relationships within and between classes. Similar to DCCA, Peng et al. [13] proposed LDCCA, which is formulated as follows:
$$\max_{w_x,w_y}\ \frac{w_x^T \tilde{C}_{xy} w_y}{\sqrt{(w_x^T C_{xx} w_x)(w_y^T C_{yy} w_y)}}\quad \text{s.t.}\ w_x^T XX^T w_x=1,\ w_y^T YY^T w_y=1 \tag{2.4}$$
where $\tilde{C}_{xy}=C_w-\eta\, C_b$. Compared with DCCA, LDCCA considers the local correlations of the within-class and between-class sets. However, these methods do not consider multimodal recognition or feature-level fusion. Haghighat et al. [20] proposed DCA, which incorporates the class structure, i.e., the memberships of the samples in classes, into the correlation analysis. Additionally, Su et al. [14] proposed MPECCA for multi-view feature learning, which is formulated as follows:
$$\begin{aligned}\max_{u,v,w_j^{(x)},w_r^{(y)}}\ & u^T\left(\sum_{i=1}^{N}\sum_{j=1}^{M}\sum_{r=1}^{M}\left(w_j^{(x)}w_r^{(y)}\right)X S_{ij}^{(x)} L_i S_{ir}^{(y)T} Y^T\right)v\\ \text{s.t.}\ & u^T S_{wx}u=1,\ v^T S_{wy}v=1,\\ & \sum_{j=1}^{M}w_j^{(x)}=1,\ w_j^{(x)}\geqslant 0,\ \sum_{r=1}^{M}w_r^{(y)}=1,\ w_r^{(y)}\geqslant 0\end{aligned} \tag{2.5}$$
where $u$ and $v$ denote the correlation projection vectors. Considering the combination of LDA and CCA, CECCA was proposed [17], whose optimization objective is as follows:
$$\max_{w_x,w_y}\ w_x^T X(I+2A)Y^T w_y+w_x^T XAX^T w_x+w_y^T YAY^T w_y\quad \text{s.t.}\ w_x^T XX^T w_x+w_y^T YY^T w_y=2 \tag{2.6}$$
where $A=2U-I$ and $I$ denotes the identity matrix. On the basis of CCA, CECCA incorporates discriminant analysis to jointly optimize the correlation and the discriminability of the combined features, which makes the extracted features more suitable for classification. However, these methods cannot achieve closed-form solutions. CDCA [26] combined GMML and discriminative CCA to achieve a closed-form solution in a Riemannian manifold space; its optimization objective is as follows:
$$\min_{A\succ 0}\ \gamma\,\mathrm{tr}(AC)+(1-\gamma)\left(\mathrm{tr}(AS_Z)+\mathrm{tr}(A^{-1}D_Z)\right)=\mathrm{tr}\left(A(\gamma C+(1-\gamma)S_Z)\right)+\mathrm{tr}\left(A^{-1}(1-\gamma)D_Z\right) \tag{2.7}$$
From Eq (2.7) and CDCA [26], we can see that, with the help of the discriminative part and a closed-form solution, multi-view learning can readily attain the global optimality of solutions and achieve good results.
Existing CCA methods suffer from three main problems: (1) the similarity and dissimilarity across views are not modeled; (2) although the data labels can be exploited by imposing supervised constraints, the resulting objective functions are nonconvex; (3) the cross-view correlations are modeled in Euclidean space, or through RKHS kernel transformations [30,31], whose discriminating ability is limited.
We present a novel cross-view learning model, called DCLMP, in which not only the within-class and between-class scatters are characterized, but also the similarity and dissimilarity of the training data across views are modelled. Although many preferable characteristics are incorporated in DCLMP, its objective function is still nonconvex. To facilitate pursuing globally optimal solutions, we further remodel DCLMP in a Riemannian manifold space to make the objective function convex. The resulting method is named C-DCLMP.
Assume we are given $N$ training instances sampled from $K$ classes with two views of feature representations, i.e., $X=[X^1,X^2,\cdots,X^K]\in\mathbb{R}^{p\times N}$ with $X^k=[x_1^k,x_2^k,\cdots,x_{N_k}^k]$ being the $N_k$ $x$-view instances from the $k$th class, and $Y=[Y^1,Y^2,\cdots,Y^K]\in\mathbb{R}^{q\times N}$ with $Y^k=[y_1^k,y_2^k,\cdots,y_{N_k}^k]$ being the $N_k$ $y$-view instances from the $k$th class, where $x_i^k$ and $y_i^k$ stand for the two view representations of the same instance. In order to concatenate them for subsequent classification, we denote by $U\in\mathbb{R}^{p\times r}$ and $V\in\mathbb{R}^{q\times r}$ the projection matrices of the two views that transform their representations into an $r$-dimensional common space.
To perform cross-view learning while exploiting supervision knowledge, in terms of similar and dissimilar relationships among instances within each view and across views as well as the sample distribution manifolds, we construct DCLMP. To this end, the model should take the following aspects into account: 1) distances between similar instances from the same class should be reduced while those between dissimilar instances from different classes should be enlarged, at both the intra-view and inter-view levels; 2) the manifold structures embedded in similar and dissimilar instances should be preserved. These modelling considerations are intuitively demonstrated in Figure 1.
Along this line, we construct the objective function of DCLMP as follows:
$$\begin{aligned} \min_{\{U,V\}}\ & \frac{1}{N}\sum_{i=1}^{N}\frac{1}{N}\sum_{j=1}^{N}\left\|U^T x_i-V^T y_j\right\|_2^2\, L_{ij} \\ & +\lambda_1\frac{1}{K}\sum_{k=1}^{K}\frac{1}{N_k}\sum_{i=1}^{N_k}\frac{1}{k_n}\sum_{j=1}^{k_n}\left\{\left\|U^T x_i^k-U^T x_j^k\right\|_F^2\, S_{ij}^{wx}+\left\|V^T y_i^k-V^T y_j^k\right\|_F^2\, S_{ij}^{wy}\right\} \\ & -\lambda_2\frac{1}{K}\sum_{k=1}^{K}\sum_{h\neq k}\frac{1}{N_k}\sum_{i=1}^{N_k}\frac{1}{k_n}\sum_{j=1}^{k_n}\left\{\left\|U^T x_i^k-U^T x_j^h\right\|_F^2\, S_{ij}^{bx}+\left\|V^T y_i^k-V^T y_j^h\right\|_F^2\, S_{ij}^{by}\right\} \end{aligned} \tag{3.1}$$
where $U$ and $V$ denote the projection matrices into the $r$-dimensional common space of the two views, and $k_n$ denotes the number of nearest neighbors used for an instance. $L$ is the discriminative weighting matrix. $S^{wx}$ and $S^{wy}$ stand for the within-class manifold weighting matrices of the two views of feature representations, while $S^{bx}$ and $S^{by}$ stand for the corresponding between-class manifold weighting matrices. Their elements are defined as follows:
$$L_{ij}=\begin{cases}\dfrac{1}{N_k} & x_i \text{ and } y_j \text{ are from the same class } k\\[4pt] 0 & x_i \text{ and } y_j \text{ are from different classes}\end{cases} \tag{3.2}$$

$$S_{ij}^{wx}=\begin{cases}\exp\left(-\dfrac{\|x_i^k-x_j^k\|^2}{\sigma_x^2}\right) & x_j^k\in \mathrm{KNN}_{k_n}(x_i^k)\\[4pt] 0 & x_j^k\notin \mathrm{KNN}_{k_n}(x_i^k)\end{cases} \tag{3.3}$$

$$S_{ij}^{wy}=\begin{cases}\exp\left(-\dfrac{\|y_i^k-y_j^k\|^2}{\sigma_y^2}\right) & y_j^k\in \mathrm{KNN}_{k_n}(y_i^k)\\[4pt] 0 & y_j^k\notin \mathrm{KNN}_{k_n}(y_i^k)\end{cases} \tag{3.4}$$

$$S_{ij}^{bx}=\begin{cases}\exp\left(-\dfrac{\|x_i^k-x_j^h\|^2}{\sigma_x^2}\right) & x_j^h\in \mathrm{KNN}_{k_n}(x_i^k)\\[4pt] 0 & x_j^h\notin \mathrm{KNN}_{k_n}(x_i^k)\end{cases} \tag{3.5}$$

$$S_{ij}^{by}=\begin{cases}\exp\left(-\dfrac{\|y_i^k-y_j^h\|^2}{\sigma_y^2}\right) & y_j^h\in \mathrm{KNN}_{k_n}(y_i^k)\\[4pt] 0 & y_j^h\notin \mathrm{KNN}_{k_n}(y_i^k)\end{cases} \tag{3.6}$$
where $\mathrm{KNN}_{k_n}(\cdot)$ denotes the set of $k_n$-nearest neighbors of an instance, and $\sigma_x$ and $\sigma_y$ stand for width coefficients that normalize the weights.
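For illustration, the weighting matrices of Eqs (3.2)–(3.6) can be assembled directly from their definitions; below is a minimal sketch, assuming class labels are given as an integer vector and using a brute-force neighbor search (the helper names are ours, not from the paper). For the within-class weights, both inputs come from the same class; for the between-class weights, the second input gathers the other classes' instances.

```python
import numpy as np

def gaussian_knn_weights(A, B, kn, sigma2):
    """Weight pattern of Eqs (3.3)-(3.6): S_ij = exp(-||a_i - b_j||^2 / sigma2),
    kept only where b_j is among the kn nearest neighbors of a_i, else 0.

    A: (d, Na) and B: (d, Nb) hold instances column-wise.
    """
    d2 = ((A[:, :, None] - B[:, None, :]) ** 2).sum(axis=0)  # (Na, Nb) squared distances
    S = np.exp(-d2 / sigma2)
    # Zero out all entries beyond each row's kn nearest neighbors.
    far = np.argsort(d2, axis=1)[:, kn:]
    np.put_along_axis(S, far, 0.0, axis=1)
    return S

def label_weight_matrix(labels):
    """L of Eq (3.2): L_ij = 1/N_k if samples i and j share class k, else 0."""
    labels = np.asarray(labels)
    counts = np.bincount(labels)                # N_k for each class k
    same = (labels[:, None] == labels[None, :]).astype(float)
    return same / counts[labels][:, None]

# Within-class weights: gaussian_knn_weights(Xk, Xk, kn, sigma_x**2);
# between-class weights: pass the other classes' data as the second argument.
```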
In Eq (3.1), the first part characterizes the cross-view similarity and dissimilarity discriminations, the second part preserves the manifold relationships within each class, while the third part magnifies the distribution margins between dissimilar pairs of instances. In this way, both the discriminative information and the manifold distributions are modelled in a joint objective function.
For convenience of solving Eq (3.1), we transform it into the following concise form
$$\min_{A\succ 0}\ \mathrm{tr}\left(A(C+\lambda_1 S_z-\lambda_2 D_z)\right) \tag{3.7}$$
with
$$A=\begin{bmatrix} U \\ V \end{bmatrix}\begin{bmatrix} U \\ V \end{bmatrix}^T \tag{3.8}$$
$$\begin{aligned} C={} & \begin{bmatrix} I_{p\times p} \\ 0_{q\times p} \end{bmatrix} X M^L X^T \begin{bmatrix} I_{p\times p}, 0_{p\times q} \end{bmatrix}+\begin{bmatrix} 0_{p\times q} \\ I_{q\times q} \end{bmatrix} Y M^L Y^T \begin{bmatrix} 0_{q\times p}, I_{q\times q} \end{bmatrix} \\ & -\begin{bmatrix} I_{p\times p} \\ 0_{q\times p} \end{bmatrix} X L Y^T \begin{bmatrix} 0_{q\times p}, I_{q\times q} \end{bmatrix}-\begin{bmatrix} 0_{p\times q} \\ I_{q\times q} \end{bmatrix} Y L X^T \begin{bmatrix} I_{p\times p}, 0_{p\times q} \end{bmatrix} \end{aligned} \tag{3.9}$$

$$S_z=\sum_{k=1}^{K}\left\{\begin{bmatrix} I_{p\times p} \\ 0_{q\times p} \end{bmatrix} X^k \left(M^{wx}+M^{wx\,T}-S^{wx}-S^{wx\,T}\right) X^{k\,T} \begin{bmatrix} I_{p\times p}, 0_{p\times q} \end{bmatrix}+\begin{bmatrix} 0_{p\times q} \\ I_{q\times q} \end{bmatrix} Y^k \left(M^{wy}+M^{wy\,T}-S^{wy}-S^{wy\,T}\right) Y^{k\,T} \begin{bmatrix} 0_{q\times p}, I_{q\times q} \end{bmatrix}\right\} \tag{3.10}$$

$$D_z=\begin{bmatrix} I_{p\times p} \\ 0_{q\times p} \end{bmatrix} X \left(M^{bx}+M^{bx\,T}-S^{bx}-S^{bx\,T}\right) X^T \begin{bmatrix} I_{p\times p}, 0_{p\times q} \end{bmatrix}+\begin{bmatrix} 0_{p\times q} \\ I_{q\times q} \end{bmatrix} Y \left(M^{by}+M^{by\,T}-S^{by}-S^{by\,T}\right) Y^T \begin{bmatrix} 0_{q\times p}, I_{q\times q} \end{bmatrix} \tag{3.11}$$

where $M^{wx}_{ii}=\sum_{j=1}^{N} S^{wx}_{ij}$, $M^{wy}_{ii}=\sum_{j=1}^{N} S^{wy}_{ij}$, $M^{bx}_{ii}=\sum_{j=1}^{N} S^{bx}_{ij}$, $M^{by}_{ii}=\sum_{j=1}^{N} S^{by}_{ij}$, and $M^{L}_{ii}=\sum_{j=1}^{N} L_{ij}$.
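As an illustration of Eqs (3.9)–(3.11), the joint-space matrices can be assembled with simple zero-padding; the sketch below (with our own helper names) shows the $C$ term and the $M+M^T-S-S^T$ pattern shared by $S_z$ and $D_z$.

```python
import numpy as np

def pad_x(X, q):
    """Embed x-view data into the (p+q)-dimensional joint space: [X; 0]."""
    return np.vstack([X, np.zeros((q, X.shape[1]))])

def pad_y(Y, p):
    """Embed y-view data into the joint space: [0; Y]."""
    return np.vstack([np.zeros((p, Y.shape[1])), Y])

def laplacian_like(S):
    """The M + M^T - S - S^T pattern of Eqs (3.10)/(3.11), M = diag(row sums)."""
    M = np.diag(S.sum(axis=1))
    return M + M.T - S - S.T

def assemble_C(X, Y, L):
    """Cross-view term C of Eq (3.9) in the joint (p+q)-space."""
    p, q = X.shape[0], Y.shape[0]
    Xp, Yp = pad_x(X, q), pad_y(Y, p)
    ML = np.diag(L.sum(axis=1))              # M^L with M^L_ii = sum_j L_ij
    return (Xp @ ML @ Xp.T + Yp @ ML @ Yp.T
            - Xp @ L @ Yp.T - Yp @ L.T @ Xp.T)   # L.T == L since L is symmetric

# S_z sums pad_x(Xk, q) @ laplacian_like(Swx_k) @ pad_x(Xk, q).T (plus the
# y-view analogue) over the classes k; D_z follows the same pattern with the
# between-class weights applied to the whole X and Y.
```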
We let $J$ denote the objective function value of Eq (3.7), replace $A$ by $QQ^T$ with the constraint $Q^TQ=I$, and introduce a Lagrange multiplier matrix $\Lambda$, rewriting Eq (3.7) as
$$J\{Q,\Lambda\}=\mathrm{tr}\left(Q^T(C+\lambda_1 S_z-\lambda_2 D_z)Q\right)-\mathrm{tr}\left(\Lambda(Q^TQ-I)\right). \tag{3.12}$$
Calculating the partial derivative of $J\{Q,\Lambda\}$ with respect to $Q$ and setting it to zero yields
$$(C+\lambda_1 S_z-\lambda_2 D_z)Q=Q\Lambda. \tag{3.13}$$
The projection matrix $Q$ can thus be obtained by computing the required number of eigenvectors of $C+\lambda_1 S_z-\lambda_2 D_z$ corresponding to its smallest eigenvalues. Finally, we can recover $A=QQ^T$, and $U$ and $V$ can be obtained through Eq (3.8).
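In code, the DCLMP solution therefore reduces to a single symmetric eigendecomposition; a minimal sketch, assuming $C$, $S_z$ and $D_z$ have been assembled as above:

```python
import numpy as np

def solve_dclmp(C, Sz, Dz, lam1, lam2, p, r):
    """DCLMP solution of Eq (3.13): Q collects the r eigenvectors of
    C + lam1*Sz - lam2*Dz with the smallest eigenvalues."""
    M = C + lam1 * Sz - lam2 * Dz
    M = (M + M.T) / 2.0                    # symmetrize against round-off
    _, vecs = np.linalg.eigh(M)            # eigenvalues in ascending order
    Q = vecs[:, :r]                        # smallest-r eigenvectors
    A = Q @ Q.T                            # recover A as in Eq (3.8)
    return Q[:p], Q[p:], A                 # U, V and A
```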
We find that such an objective function may not be convex [32,33]. Moreover, the separability of nonlinear data patterns can be significantly improved in a geodesic space, which benefits their subsequent recognition. Referring to the equivalence $\min_{A\succ 0}\mathrm{tr}(A^{-1}\bullet)\Leftrightarrow\max_{A\succ 0}\mathrm{tr}(A\bullet)$ [34], we reformulate the DCLMP objective in Eq (3.7) equivalently as
$$\min_{A\succ 0}\ \mathrm{tr}\left(AC+\lambda_1 AS_z+\lambda_2 A^{-1}D_z\right)\Leftrightarrow\min_{A\succ 0}\ \mathrm{tr}(AC)+\lambda_1\mathrm{tr}(AS_z)+\lambda_2\mathrm{tr}(A^{-1}D_z), \tag{3.14}$$
Minimizing the third term $\lambda_2\mathrm{tr}(A^{-1}D_z)$ is equivalent to minimizing $-\lambda_2\mathrm{tr}(AD_z)$ in Eq (3.7). Although this last term is nonlinear, it is defined in the convex cone space [35] and thus is still convex. As a result, Eq (3.14) is entirely convex with regard to $A$ and enjoys a closed-form solution [36,37,38]. To distinguish Eq (3.14) from DCLMP, we call it C-DCLMP.
For convenience of deriving the closed-form solution, we reformulate Eq (3.14) as
$$\min_{A\succ 0}\ \gamma\,\mathrm{tr}(A^{-1}D_z)+(1-\gamma)\left(\mathrm{tr}(AS_z)+\alpha\,\mathrm{tr}(AC)\right), \tag{3.15}$$
where we set $\gamma\in(0,1)$ [34]. Let $J(A):=\gamma\,\mathrm{tr}(A^{-1}D_z)+(1-\gamma)\left(\mathrm{tr}(AS_z)+\alpha\,\mathrm{tr}(AC)\right)$. Setting the gradient of $J(A)$ with respect to $A$ to zero gives
$$(1-\gamma)A(S_z+\alpha C)A=\gamma D_z, \tag{3.16}$$
whose solution is the midpoint of the geodesic joining $\left((1-\gamma)(S_z+\alpha C)\right)^{-1}$ and $\gamma D_z$, that is
$$A=\left((1-\gamma)(S_z+\alpha C)\right)^{-1}\,\sharp_{1/2}\,(\gamma D_z), \tag{3.17}$$
where $(\cdot)\,\sharp_{1/2}\,(\cdot)$ denotes the geodesic midpoint. We extend the solution (3.17) along the geodesic by replacing $(\cdot)\,\sharp_{1/2}\,(\cdot)$ with the weighted geodesic mean $(\cdot)\,\sharp_t\,(\cdot)$, $0\leqslant t\leqslant 1$.
We further add a regularizer with prior knowledge to (3.15). Here, we incorporate the symmetrized LogDet divergence, and consequently (3.15) becomes
$$\min_{A\succ 0}\ \gamma\,\mathrm{tr}(A^{-1}D_z)+(1-\gamma)\left(\mathrm{tr}(AS_z)+\alpha\,\mathrm{tr}(AC)\right)+\lambda\, D_{sld}(A,A_0), \tag{3.18}$$

with

$$D_{sld}(A,A_0)=\mathrm{tr}(AA_0^{-1})+\mathrm{tr}(A^{-1}A_0)-2(p+q), \tag{3.19}$$
where $(p+q)$ is the dimension of the data. Fortunately, complying with the definition of the geometric mean [36], Eq (3.18) is still convex. We let $G(A):=\gamma\,\mathrm{tr}(A^{-1}D_z)+(1-\gamma)\left(\mathrm{tr}(AS_z)+\alpha\,\mathrm{tr}(AC)\right)+\lambda\, D_{sld}(A,A_0)$. Setting the gradient of $G(A)$ with respect to $A$ to zero, we obtain
$$(1-\gamma)A(S_z+\alpha C)A+\lambda AA_0^{-1}A=\gamma D_z+\lambda A_0, \tag{3.20}$$
from which we calculate the closed-form solution as
$$A=\left((1-\gamma)(S_z+\alpha C)+\lambda A_0^{-1}\right)^{-1}\,\sharp_t\,(\gamma D_z+\lambda A_0). \tag{3.21}$$
More precisely, according to the definition of $(\cdot)\,\sharp_t\,(\cdot)$, namely the geodesic mean joining two matrices, we can directly expand the final solution of C-DCLMP in Eq (3.18) as
$$\begin{aligned} A={} & \left((1-\gamma)(S_z+\alpha C)+\lambda A_0^{-1}\right)^{-1}\,\sharp_t\,(\gamma D_z+\lambda A_0) \\ ={} & \left((1-\gamma)(S_z+\alpha C)+\lambda A_0^{-1}\right)^{1/2}\Big(\left((1-\gamma)(S_z+\alpha C)+\lambda A_0^{-1}\right)^{-1/2}(\gamma D_z+\lambda A_0)\\ & \left((1-\gamma)(S_z+\alpha C)+\lambda A_0^{-1}\right)^{-1/2}\Big)^t\left((1-\gamma)(S_z+\alpha C)+\lambda A_0^{-1}\right)^{1/2}, \end{aligned} \tag{3.22}$$
where we set $A_0$ to be the $(p+q)$-order identity matrix $I_{p+q}$. Once $A$ is obtained, $U$ and $V$ are recovered through Eq (3.8).
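The weighted geodesic mean $(\cdot)\,\sharp_t\,(\cdot)$ can be computed with standard SPD-matrix fractional powers; below is a minimal sketch of the closed form in Eq (3.21) with $A_0=I$, assuming the involved matrices are symmetric positive definite (the `spd_power` helper is our own).

```python
import numpy as np

def spd_power(M, t, eps=1e-12):
    """Fractional power of a symmetric positive (semi)definite matrix via eigh."""
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.maximum(vals, eps) ** t) @ vecs.T

def geodesic_mean(P, Q, t):
    """Weighted geodesic mean P #_t Q = P^{1/2} (P^{-1/2} Q P^{-1/2})^t P^{1/2}."""
    Ph, Pmh = spd_power(P, 0.5), spd_power(P, -0.5)
    return Ph @ spd_power(Pmh @ Q @ Pmh, t) @ Ph

def solve_cdclmp(C, Sz, Dz, alpha, gamma, lam, t):
    """Closed-form C-DCLMP solution of Eq (3.21) with A0 = I."""
    n = C.shape[0]                                        # n = p + q
    P = np.linalg.inv((1 - gamma) * (Sz + alpha * C) + lam * np.eye(n))
    Q = gamma * Dz + lam * np.eye(n)
    A = geodesic_mean(P, Q, t)
    # U and V follow from A's dominant eigenvectors, since A = [U; V][U; V]^T.
    return A
```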
For a test instance with view representations $x$ and $y$, the concatenated representation can be generated by $U^T x+V^T y=\begin{bmatrix} U \\ V \end{bmatrix}^T\begin{bmatrix} x \\ y \end{bmatrix}$, and the classification decision can then be made on this fused representation using a classifier (e.g., KNN).
To comprehensively evaluate the proposed methods, we first performed comparative experiments on several benchmark and real face datasets. Besides, we also performed sensitivity analyses of the model parameters.
For evaluation and comparison, CCA [1], DCCA [12], MPECCA [14], CECCA [17], DCA [20] and CDCA [26] were implemented. All hyper-parameters were cross-validated in the ranges [0, 0.1, ..., 1] for $t$ and $\gamma$, and [1e-7, 1e-6, ..., 1e3] for $\alpha$ and $\lambda$. For the concatenated cross-view representations, a 5-nearest-neighbors classifier was employed for classification. Recognition accuracy (%, higher is better) and mean absolute error (MAE, lower is better) were adopted as performance measures.
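As an illustration of this protocol, here is a minimal sketch of the final classification stage, fusing the two views with the learned projections and scoring a 5-nearest-neighbors classifier (the use of scikit-learn and the helper names are our own choices, not prescribed by the paper):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fused_features(U, V, X, Y):
    """Concatenated cross-view representation U^T x + V^T y, one row per sample."""
    return (U.T @ X + V.T @ Y).T           # shape (N, r)

def knn_accuracy(U, V, Xtr, Ytr, ltr, Xte, Yte, lte, k=5):
    """Fit a k-NN classifier on fused training features; return accuracy in %."""
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(fused_features(U, V, Xtr, Ytr), ltr)
    return 100.0 * clf.score(fused_features(U, V, Xte, Yte), lte)
```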
We first performed experiments on several widely used non-face multi-view datasets, i.e., MFD [39], USPS [40], AWA [41] and ADNI [42]. The results are reported in Table 1.
Dataset | View Representations | CCA | DCA | MPECCA | DCCA | CECCA | CDCA | DCLMP (ours) | C-DCLMP (ours) |
MFD | fac | fou | 80.22 ± 0.9 | 80.00 ± 0.2 | 90.64 ± 1.3 | 95.15 ± 0.9 | 96.46 ± 2.4 | 98.11 ± 0.3 | 94.49 ± 1.7 | 98.03 ± 0.3 |
fac | kar | 92.12 ± 0.5 | 90.10 ± 0.8 | 95.39 ± 0.6 | 95.33 ± 0.7 | 96.52 ± 1.2 | 97.06 ± 0.4 | 96.86 ± 0.5 | 97.93 ± 0.6 | |
fac | mor | 78.22 ± 0.8 | 63.22 ± 4.3 | 72.32 ± 2.4 | 95.22 ± 0.9 | 94.23 ± 1.0 | 98.13 ± 0.3 | 90.97 ± 2.5 | 97.63 ± 0.3 | |
fac | pix | 83.02 ± 1.2 | 90.20 ± 0.5 | 94.65 ± 0.5 | 65.60 ± 1.1 | 93.67 ± 2.9 | 97.52 ± 0.4 | 97.45 ± 0.5 | 97.21 ± 0.4 | |
fac | zer | 84.00 ± 0.6 | 71.50 ± 2.2 | 93.79 ± 0.7 | 96.00 ± 0.6 | 97.04 ± 0.6 | 97.03 ± 0.4 | 95.98 ± 0.3 | 97.75 ± 0.4 | |
fou | kar | 90.11 ± 1.0 | 75.42 ± 5.6 | 93.98 ± 0.4 | 89.12 ± 4.3 | 96.90 ± 0.5 | 97.19 ± 0.6 | 97.45 ± 0.4 | 97.45 ± 0.3 | |
fou | mor | 70.22 ± 0.4 | 55.82 ± 4.6 | 60.62 ± 1.6 | 82.30 ± 0.9 | 78.25 ± 0.6 | 83.81 ± 0.7 | 82.09 ± 1.0 | 84.80 ± 0.6 | |
fou | pix | 68.44 ± 0.4 | 76.10 ± 4.7 | 78.24 ± 1.1 | 90.41 ± 3.2 | 76.28 ± 1.3 | 96.11 ± 0.5 | 97.62 ± 0.4 | 97.74 ± 0.3 | |
fou | zer | 74.10 ± 0.9 | 62.80 ± 4.1 | 79.38 ± 1.2 | 79.53 ± 4.5 | 83.16 ± 1.4 | 85.98 ± 0.9 | 85.33 ± 1.1 | 86.56 ± 1.0 | |
kar | mor | 64.09 ± 0.6 | 82.00 ± 1.6 | 72.92 ± 2.7 | 91.95 ± 2.8 | 91.89 ± 0.6 | 97.28 ± 0.5 | 96.83 ± 0.5 | 97.14 ± 0.4 | |
kar | pix | 88.37 ± 0.9 | 88.85 ± 0.8 | 95.07 ± 0.6 | 92.59 ± 2.0 | 95.98 ± 0.3 | 94.68 ± 0.5 | 97.54 ± 0.4 | 97.31 ± 0.5 | |
kar | zer | 90.77 ± 1.0 | 75.97 ± 2.8 | 94.17 ± 0.6 | 88.47 ± 2.9 | 93.57 ± 0.9 | 96.69 ± 0.4 | 96.98 ± 0.4 | 97.42 ± 0.4 | |
mor | pix | 68.66 ± 1.5 | 82.01 ± 2.1 | 67.21 ± 2.3 | 93.04 ± 0.7 | 90.08 ± 1.0 | 96.89 ± 0.4 | 97.20 ± 0.5 | 97.19 ± 0.4 | |
mor | zer | 73.22 ± 0.6 | 50.35 ± 1.8 | 60.95 ± 1.4 | 84.55 ± 0.9 | 80.59 ± 0.9 | 84.19 ± 0.8 | 81.75 ± 1.1 | 84.29 ± 0.7 | |
pix | zer | 82.46 ± 0.6 | 71.16 ± 2.8 | 82.81 ± 1.2 | 91.67 ± 2.1 | 91.81 ± 1.2 | 96.30 ± 0.5 | 97.35 ± 0.5 | 97.30 ± 0.5 | |
AWA | cq | lss | 73.11 ± 2.1 | 62.08 ± 0.3 | 76.19 ± 1.0 | 70.51 ± 1.3 | 77.53 ± 1.7 | 87.80 ± 2.8 | 89.03 ± 1.4 | 89.80 ± 1.2 |
cq | phog | 65.21 ± 1.4 | 73.10 ± 1.2 | 72.42 ± 1.6 | 70.15 ± 0.9 | 74.51 ± 2.1 | 85.58 ± 2.7 | 86.71 ± 2.3 | 86.81 ± 1.2 | |
cq | rgsift | 60.22 ± 1.3 | 61.40 ± 1.7 | 78.04 ± 1.3 | 82.87 ± 2.4 | 82.83 ± 1.4 | 90.99 ± 3.0 | 93.44 ± 0.6 | 94.34 ± 0.8 | |
cq | sift | 74.33 ± 1.3 | 61.28 ± 1.9 | 77.85 ± 1.4 | 83.19 ± 2.1 | 80.05 ± 1.7 | 81.59 ± 5.2 | 87.17 ± 0.8 | 90.68 ± 0.8 | |
cq | surf | 75.86 ± 1.7 | 69.30 ± 2.1 | 79.07 ± 0.8 | 73.55 ± 2.3 | 81.59 ± 1.5 | 93.58 ± 1.1 | 94.36 ± 1.0 | 95.35 ± 0.5 | |
lss | phog | 69.96 ± 1.7 | 59.72 ± 0.2 | 68.12 ± 1.2 | 64.86 ± 2.6 | 71.36 ± 1.4 | 80.48 ± 2.0 | 81.76 ± 1.1 | 81.62 ± 1.1 | |
lss | rgsift | 78.65 ± 0.9 | 63.21 ± 1.3 | 73.64 ± 1.0 | 78.28 ± 2.8 | 77.28 ± 1.4 | 87.38 ± 4.3 | 90.13 ± 0.7 | 89.95 ± 1.0 | |
lss | sift | 73.49 ± 1.0 | 65.72 ± 2.1 | 73.12 ± 1.4 | 66.21 ± 1.6 | 76.69 ± 1.7 | 81.56 ± 2.4 | 84.05 ± 0.9 | 84.07 ± 1.9 | |
lss | surf | 76.30 ± 1.4 | 65.33 ± 1.8 | 74.84 ± 1.6 | 79.06 ± 2.8 | 78.52 ± 1.3 | 89.81 ± 2.5 | 89.75 ± 0.8 | 91.12 ± 0.7 | |
phog | rgsift | 68.18 ± 1.1 | 48.38 ± 1.0 | 69.49 ± 2.3 | 77.37 ± 1.5 | 74.41 ± 1.5 | 82.76 ± 1.1 | 83.57 ± 1.6 | 83.68 ± 1.2 | |
phog | sift | 68.26 ± 1.1 | 70.24 ± 1.1 | 68.97 ± 1.3 | 63.16 ± 1.3 | 72.14 ± 1.5 | 80.50 ± 1.2 | 83.57 ± 1.1 | 83.75 ± 1.5 | |
phog | surf | 64.57 ± 1.4 | 56.94 ± 0.5 | 71.55 ± 1.4 | 75.68 ± 1.9 | 74.43 ± 2.1 | 84.97 ± 2.6 | 88.02 ± 1.8 | 87.34 ± 0.8 | |
rgsift | sift | 71.35 ± 1.3 | 58.56 ± 2.3 | 72.85 ± 1.1 | 75.28 ± 2.5 | 76.69 ± 1.7 | 90.76 ± 2.2 | 93.44 ± 0.4 | 93.79 ± 1.0 | |
rgsift | surf | 75.55 ± 1.3 | 67.22 ± 1.6 | 76.94 ± 2.2 | 84.10 ± 2.4 | 80.46 ± 1.7 | 93.25 ± 1.2 | 92.82 ± 0.8 | 93.66 ± 0.8 | |
sift | surf | 75.33 ± 1.3 | 63.36 ± 1.6 | 74.27 ± 1.2 | 82.14 ± 2.7 | 75.51 ± 1.1 | 90.07 ± 3.4 | 90.67 ± 1.0 | 91.69 ± 1.1 | |
ADNI | AV | FDG | 65.47 ± 1.8 | 73.28 ± 2.1 | 75.28 ± 2.6 | 76.25 ± 2.1 | 76.26 ± 2.5 | 79.59 ± 1.9 | 68.64 ± 3.3 | 80.86 ± 2.1 |
AV | VBM | 71.02 ± 2.4 | 71.02 ± 2.8 | 73.24 ± 3.1 | 63.47 ± 2.1 | 60.67 ± 2.7 | 81.59 ± 2.5 | 78.38 ± 2.5 | 80.70 ± 2.8 | |
FDG | VBM | 61.37 ± 1.2 | 65.28 ± 1.6 | 70.37 ± 2.6 | 64.05 ± 1.6 | 70.95 ± 1.8 | 80.12 ± 2.0 | 74.97 ± 2.9 | 80.21 ± 1.7 | |
USPS | left | right | 62.14 ± 0.6 | 80.11 ± 1.2 | 66.67 ± 0.9 | 63.96 ± 2.0 | 82.89 ± 1.9 | 89.76 ± 0.3 | 96.19 ± 0.7 | 96.03 ± 0.6 |
The proposed DCLMP yielded the second-best results in most cases, slightly behind the proposed C-DCLMP. The improvement achieved by C-DCLMP is significant, especially on the AWA and USPS datasets.
We also conducted age estimation experiments on AgeDB [43], CACD [44] and IMDB-WIKI [45]. These three databases are illustrated in Figure 2.
We extracted BIF [46] and HoG [47] feature vectors and reduced their dimensions to 200 by PCA to serve as the two view representations. We randomly chose 50, 100 and 150 samples for training. We also used VGG19 [48] and Resnet50 [45] to extract deep feature vectors from the AgeDB, CACD and IMDB-WIKI databases. The results are reported in Tables 3, 5 and 6.
training samples | CCA | DCA | MPECCA | DCCA | CECCA | CDCA | DCLMP (ours) | C-DCLMP (ours) |
50 | 17.70 ± 0.5 | 17.78 ± 0.5 | 16.10 ± 0.4 | 15.93 ± 0.4 | 15.62 ± 0.5 | 15.48 ± 0.2 | 15.59 ± 0.1 | 15.16 ± 0.4 |
100 | 16.81 ± 0.5 | 17.23 ± 0.6 | 14.74 ± 0.5 | 14.79 ± 0.5 | 14.67 ± 0.4 | 14.57 ± 0.2 | 14.60 ± 0.2 | 14.13 ± 0.2 |
150 | 15.43 ± 0.5 | 16.25 ±0.6 | 13.83 ± 0.5 | 13.49 ± 0.4 | 13.43 ± 0.4 | 13.21 ± 0.2 | 13.48 ± 0.2 | 13.19 ± 0.3 |
training samples | CCA | DCA | MPECCA | DCCA | CECCA | CDCA | DCLMP (ours) | C-DCLMP (ours) |
50 | 16.17 ± 0.5 | 16.27 ± 0.46 | 15.42 ± 0.5 | 15.09 ± 0.5 | 14.78 ± 0.5 | 14.67 ± 0.3 | 14.75 ± 0.2 | 14.52 ± 0.2 |
100 | 15.86 ± 0.5 | 15.79 ± 0.6 | 14.89 ± 0.8 | 14.23 ± 0.4 | 14.09 ± 0.4 | 13.78 ± 0.3 | 14.07 ± 0.3 | 13.68 ± 0.2 |
150 | 15.09 ± 0.5 | 14.81 ± 0.3 | 13.97 ± 0.5 | 13.41 ± 0.6 | 13.34 ± 0.5 | 13.16 ± 0.3 | 13.46 ± 0.3 | 13.15 ± 0.3 |
training samples | CCA | DCA | MPECCA | DCCA | CECCA | CDCA | DCLMP (ours) | C-DCLMP (ours) |
50 | 16.28 ± 0.5 | 16.78 ± 0.4 | 15.79 ± 0.4 | 14.98 ± 0.4 | 14.38 ± 0.4 | 14.28 ± 0.4 | 14.10 ± 0.3 | 13.95 ± 0.3 |
100 | 15.45 ± 0.4 | 16.52 ± 0.5 | 15.04 ± 0.5 | 14.44 ± 0.4 | 13.99 ± 0.4 | 13.98 ± 0.3 | 13.85 ± 0.2 | 13.74 ± 0.2 |
150 | 15.20 ± 0.5 | 15.41 ± 0.5 | 14.79 ± 0.4 | 14.02 ± 0.5 | 13.73 ± 0.5 | 13.79 ± 0.2 | 13.67 ± 0.1 | 13.63 ± 0.3 |
training samples | CCA | DCA | MPECCA | DCCA | CECCA | CDCA | DCLMP (ours) | C-DCLMP (ours) |
50 | 16.07 ± 0.6 | 16.27 ± 0.4 | 15.35 ± 0.4 | 14.21 ± 0.6 | 13.39 ± 0.4 | 13.49 ± 0.3 | 13.52 ± 0.3 | 13.27 ± 0.2 |
100 | 15.69 ± 0.5 | 15.75 ± 0.3 | 14.65 ± 0.5 | 14.17 ± 0.5 | 13.28 ± 0.3 | 13.26 ± 0.3 | 13.24 ± 0.2 | 12.97 ± 0.4 |
150 | 15.22 ± 0.4 | 15.32 ± 0.4 | 14.45 ± 0.3 | 14.01 ± 0.6 | 13.01 ± 0.3 | 12.94 ± 0.3 | 12.91 ± 0.3 | 12.76 ± 0.4 |
training samples | CCA | DCA | MPECCA | DCCA | CECCA | CDCA | DCLMP (ours) | C-DCLMP (ours) |
50 | 14.29 ± 0.5 | 14.39 ± 0.5 | 13.49 ± 0.3 | 13.04 ± 0.5 | 12.26 ± 0.4 | 12.37 ± 0.3 | 11.84 ± 0.3 | 11.65 ± 0.3 |
100 | 13.97 ± 0.5 | 13.87 ± 0.4 | 12.79 ± 0.5 | 12.35 ± 0.3 | 11.96 ± 0.3 | 11.86 ± 0.3 | 11.53 ± 0.2 | 11.13 ± 0.3 |
150 | 13.43 ± 0.5 | 13.56 ± 0.5 | 12.48 ± 0.3 | 12.26 ± 0.4 | 11.66 ± 0.3 | 11.65 ± 0.3 | 11.45 ± 0.2 | 10.98 ± 0.3 |
The estimation errors (MAEs) of all methods decreased monotonically as the number of training samples increased. The MAEs of DCLMP are the second lowest, demonstrating the soundness of modelling the cross-view discriminative knowledge and the data manifold structures. We can also observe that C-DCLMP yields the lowest estimation errors, demonstrating its effectiveness and superiority.
For the proposed methods, we performed parameter analyses on $t$, $\gamma$ and $\lambda$ in Eq (3.21). Specifically, we conducted age estimation experiments on both AgeDB and CACD. The results are plotted in Figures 3–5.
Geometric weighting parameter $t$ of C-DCLMP: Figure 3 offers some interesting observations. With $t$ increasing from 0 to 1, the estimation error descended first and then rose again. This shows that both the within-class manifold structures and the inter-class data distributions are helpful in regularizing the model solution space.
Metric balance parameter $\gamma$ of C-DCLMP: We can observe from Figure 4 that the age estimation error (MAE) reached its lowest values when $0.1<\gamma<0.9$. This illustrates that preserving the cross-view discriminative knowledge and the manifold distributions of the data is useful and helps improve the estimation precision.
Metric prior parameter $\lambda$ of C-DCLMP: Figure 5 shows that, as $\lambda$ increased, the age estimation error descended to its lowest around $\lambda=$ 1e-1 and then increased steeply. This demonstrates that incorporating moderate metric prior knowledge regularizes the model solution positively, but excessive prior knowledge may override the rules in the data and mislead the training of the model.
For the proposed methods and the comparison methods mentioned above, we performed a time complexity analysis. Specifically, we conducted age estimation experiments on both AgeDB and CACD, choosing 100 samples from each class for training and taking the rest for testing. We reported the averaged results in Table 7.
Dataset | CCA | DCA | MPECCA | DCCA | CECCA | CDCA | DCLMP (ours) | C-DCLMP (ours) |
AgeDB | 0.10 ± 0.12 | 0.06 ± 0.05 | 0.41 ± 0.03 | 0.18 ± 0.03 | 0.52 ± 0.02 | 0.11 ± 0.10 | 57.74 ± 0.71 | 54.94 ± 0.32 |
CACD | 0.09 ± 0.13 | 0.06 ± 0.10 | 0.38 ± 0.09 | 0.15 ± 0.04 | 0.47 ± 0.03 | 0.07 ± 0.01 | 30.86 ± 0.61 | 31.04 ± 0.68 |
For the proposed methods, we performed ablation experiments. Specifically, we conducted age estimation experiments on both AgeDB and CACD. We repeated each experiment 10 times with random data partitions and report the averaged results in Table 8, where the first, second and third parts correspond to the respective terms in Eq (3.7).
Dataset | First part | Second part | Third part | C-DCLMP (ours) |
AgeDB | ✓ | ✓ | 14.49 ± 0.16 | |
✓ | ✓ | 14.51 ± 0.10 | ||
✓ | ✓ | 14.47 ± 0.22 | ||
✓ | ✓ | ✓ | 14.18 ± 0.32 | |
CACD | ✓ | ✓ | 14.07 ± 0.25 | |
✓ | ✓ | 14.08 ± 0.32 | ||
✓ | ✓ | 14.04 ± 0.21 | ||
✓ | ✓ | ✓ | 13.73 ± 0.24 |
In this paper, we proposed DCLMP, in which both the cross-view discriminative information and the spatial structural information of the training data are taken into consideration to enhance subsequent decision making. To pursue closed-form solutions, we remodeled the objective of DCLMP into a nonlinear geodesic space and consequently achieved its convex formulation (C-DCLMP). Finally, we evaluated the proposed methods and demonstrated their superiority on various benchmark and real face datasets. In the future, we will consider exploring the latent information of unlabeled data at the feature and label levels, and study how to combine related advanced multi-view learning methods to reduce the computational cost of the model and further improve its generalization ability in various scenarios.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This work was supported by the National Natural Science Foundation of China under Grant 62176128, the Open Projects Program of State Key Laboratory for Novel Software Technology of Nanjing University under Grant KFKT2022B06, the Fundamental Research Funds for the Central Universities No. NJ2022028, the Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund, as well as the Qing Lan Project of the Jiangsu Province.