Research article

Brain proteomics links oxidative stress with metabolic and cellular stress response proteins in behavioural alteration of Alzheimer’s disease model rats

  • Alzheimer’s disease (AD) impairs the memory- and learning-related behavioural performance of affected individuals. In the present study, the memory- and learning-related behavioural performance of AD model rats was compared with that of controls, followed by hippocampal proteomics. The AD rats showed altered performance in the eight-arm radial maze. Using liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS), 822 proteins were identified at a protein threshold of 95.0%, a minimum of 2 peptides and a peptide threshold of 0.1% FDR. Of these, 329 proteins were differentially expressed with statistical significance (P < 0.05), and 289 of the 329 also met the fold-change cut-off (LogFC of 1.5). The numbers of proteins linked with AD, oxidative stress (OS) and hypercholesterolemia were 59, 20 and 12, respectively, and 361 proteins were commonly expressed. The largest group of proteins differentially expressed in the AD rats were those involved in metabolic processes, followed by those linked with OS. Most notable was the perturbed state of the cholesterol-metabolizing proteins in the AD group. The current findings suggest that proteins associated with oxidative stress, glucose and cholesterol metabolism and the cellular stress response are among the most affected in AD subjects. Novel therapeutic approaches targeting these proteins could therefore be devised to counter the ever-increasing global AD burden.

    Citation: Mohammad Azizur Rahman, Shahdat Hossain, Noorlidah Abdullah, Norhaniza Aminudin. Brain proteomics links oxidative stress with metabolic and cellular stress response proteins in behavioural alteration of Alzheimer’s disease model rats[J]. AIMS Neuroscience, 2019, 6(4): 299-315. doi: 10.3934/Neuroscience.2019.4.299



    Biometric recognition, due to its convenience and security, has replaced traditional identification methods and found widespread applications in various domains. Palmprint features are easy to collect and exhibit good stability. However, the accuracy of palmprint recognition [1] can be affected when the surface skin of the subject's palm is damaged, leading to incomplete feature information. Palm vein features, concealed beneath the epidermis, require collection under near-infrared light and are immune to theft via photography. Moreover, they enable liveness detection, making them a highly secure biometric feature. Nevertheless, the non-transparency, non-uniformity, and heterogeneity of the skin tissue covering the palm veins result in scattering of near-infrared light during imaging. This phenomenon can lead to unclear palm vein images in certain populations, thereby impacting the performance of palm vein recognition.

    The fused palmprint and palm vein biometric recognition method not only leverages the stability of palmprint features but also capitalizes on the high security of palm vein features. This approach enhances the system's anti-counterfeiting capability while improving its recognition performance, making it a biometric fusion method with high security and high user acceptance. Furthermore, inspired by [2], we focus on the inter-modal correlation problem and apply the idea of joint learning to the extraction of palmprint and palm vein features.

    At this stage, multimodal biometric recognition has gradually emerged as one of the primary focuses in the field of recognition. In this context, study [3] extracts pixel difference vectors in multiple directions, achieving feature-level fusion by calculating the differences between each pixel and its linearly neighboring pixels in the palmprint and palm vein modalities. In [4], a deep scattering convolutional network extracts features from palmprints and palm veins, followed by wavelet-based fusion. In [5], iris, face, and fingerprint features are fused effectively at the score level by combining principal component analysis and local binary patterns. In [6], a structured robust and sparse least squares regression method adaptively discriminates and recognizes fused features from finger vein and finger knuckle print. In [7], a deep hash network extracts binary templates for palmprint and palm vein features, followed by score-level fusion. In [8], key points are detected and main lines extracted from hand geometry and palmprint features, with corresponding points of the palmprint image detected through template-based matching for recognition. Finally, [9] employs the Log-Gabor transform, histogram of oriented gradients (HOG), and local binary pattern (LBP) to extract features from palmprint and iris images, culminating in score-level fusion.

    At the level of biometric fusion, methods can be categorized into two types based on fusion order: pre-matching and post-matching. Sensor-level and feature-level fusion are pre-matching; score-level, rank-level, and decision-level fusion are post-matching [10]. Feature-level fusion can model biometric features from multiple dimensions and perspectives, effectively leveraging multimodal information. This helps mitigate the errors and uncertainties introduced by noise, insufficient data, and non-robust features during single-feature extraction. Therefore, a palmprint and palm vein feature-level fusion recognition method based on joint learning is proposed. Because feature-level fusion involves a substantial amount of data, appropriate block-wise dimensionality reduction is applied to the image features, minimizing the image dimensions while preserving the palmprint and palm vein features to the maximum extent. Subsequently, a sparse unsupervised projection algorithm with a "purification matrix" constraint performs dimensionality reduction and purification on the image features, minimizing data reconstruction errors, eliminating redundant information in the subspace features, and extracting compact and discriminative feature representations. Following this, the partial least squares (PLS) algorithm extracts subspace features with high grayscale variation and intra-class correlation from each modality, promoting consistency in intra-class modality representation. The features extracted through joint learning enhance the stability of image features, emphasize the saliency of important information, and improve recognition performance. Finally, a weighted summation fuses the features extracted from palmprint and palm vein, jointly optimizing the contribution of each modality for classification recognition. The specific process is illustrated in Figure 1.

    Figure 1.  System overall schematic diagram.

    In Figure 1, $ X $, $ Y $, $ {X}^{'} $, and $ {Y}^{'} $ represent the training-set and test-set image matrices of palmprints and palm veins after preprocessing, blocking, and dimensionality reduction. $ X $ and $ Y $ are processed by the projection matrices $ {P}_{X} $ and $ {P}_{Y} $ to obtain the new feature spaces $ {F}_{X} $ and $ {F}_{Y} $; the same projection matrices $ {P}_{X} $ and $ {P}_{Y} $ are then applied to the test-set images, yielding $ {F}_{{X}^{'}} $ and $ {F}_{{Y}^{'}} $. The obtained feature matrices, namely $ {F}_{X} $, $ {F}_{Y} $, $ {F}_{{X}^{'}} $, and $ {F}_{{Y}^{'}} $, undergo supervised feature extraction using partial least squares, yielding the feature matrices $ {V}_{X} $, $ {V}_{Y} $, $ {V}_{{X}^{'}} $, and $ {V}_{{Y}^{'}} $ for fusion. $ L $ and $ {L}^{'} $ denote the fused features of the training and test sets, while $ {P}^{'} $ and $ {F}^{'} $ in the fusion formulae represent the two modal features, respectively. The major contributions of the paper are:

    i. Introducing a novel approach that addresses challenges in multi-modal palmprint and palm vein feature extraction. The method employs a sparse unsupervised projection algorithm with a "purification matrix" constraint, ensuring consistent intra-modal features in a shared expression space, minimizing data reconstruction errors, and enhancing feature representations.

    ii. In this article, we utilized the partial least squares algorithm to extract high grayscale variance and category correlation subspaces from each modality. This promotes intra-modal representation consistency, improves the exploration of correlations among multi-modal samples, and thereby enhances recognition performance.

    iii. We introduce a weighted sum strategy for dynamically optimizing each modality's contribution to classification recognition. Experimental evaluations on five multimodal databases validate its suitability for high-security identity recognition scenarios.

    In summary, the contributions include a new feature-level fusion method, enhanced feature extraction, optimized inter-modal relationships, and effective fusion of palmprint and palm vein features, resulting in substantial improvements in recognition performance for multi-modal identity recognition scenarios.

    In the realm of multimodal biometric recognition, the fusion of palmprint and palm vein features offers significant advantages. However, most existing multi-modal palmprint and palm vein feature extraction methods extract feature information independently from each modality, ignoring the contribution that intra-class correlation between samples of different modalities makes to recognition performance.

    Existing biometric fusion recognition methods can be categorized into those based on traditional methods and those based on deep learning methods. In the realm of traditional methods, a study [11] proposed a cross-spectral matching system that extracts palmprint and palm vein features from the near-infrared (NIR) and visible light (RGB) spectral bands. Local binary coding is applied to palmprint features, and the NIR palm vein template is matched with the registered RGB palmprint template, with the fusion of similarity scores to enhance recognition performance. Another study [12] employed a kernel-based approach to extract facial features, Hough transform, and Daugman algorithm for left and right iris features, and Gabor filter banks for features of two thumbprints. The feature vectors are then mapped to the Reproducing Kernel Hilbert Space, followed by dimensionality reduction for feature fusion. In a different approach, a study [13] utilized low and high-frequency wavelet sub-bands to extract local and global information from palmprints and faces. A nearest-neighbor classifier is employed for sub-band recognition, and weighted majority voting is used to fuse the obtained categories. Last, a research effort [14] captured multimodal data from the same region of the hand using a single device. To obtain correlated information, fingerprint and finger vein features are decomposed into shared and private features, aiming to enhance complementarity.

    In recognition methods based on deep learning, a study [15] proposed a wavelet-based fusion strategy for processing palmprint and palm vein images. Subsequently, deep scattering convolutional networks were employed for feature extraction and recognition. Another work [16] utilized a deep Hashing network (DHN) to extract binary templates for palmprint and palm vein authentication. This approach employed a spatial transformer network to overcome rotation and misalignment issues. By fusing features from different spectra, the method leveraged its advantages and improved recognition performance for fusion recognition. Addressing iris, palm vein, and finger vein modalities, a study [17] introduced a hybrid fusion model that captures typical features through a multi-ensemble structure. It utilized the distribution information of scores to assist decision-making, enhancing recognition accuracy and security. Additionally, a research effort [18] proposed a spatial and temporal multimodal fingerprint and finger vein network named FS-STMFPFV-Net, based on fingerprint and finger vein modalities. Independent learning for the two channels was achieved, enhancing resistance to image variations, and feature selection was performed using ReliefFS.

    The traditional methods mentioned above extract features through projection, encoding, and the extraction of local and global information, but they ignore the importance of maximizing the correlation between different modal samples within a class during feature extraction. Doing so would make the features more stable and discriminative, and more consistent in a common expression space, thereby improving fusion recognition performance. Deep learning methods, by increasing the depth of the feature extraction network, enhance the saliency of important features and make them more discriminative; however, this comes at the cost of increased complexity in the network model.

    In this research, we employ a sparse unsupervised projection method constrained by a "purification matrix" to enhance intramodal features, minimizing data reconstruction errors and bolstering discriminative capabilities. Following this, the partial least squares algorithm discerns significant gray-scale variance and category relevance within each modality. A weighted sum is then applied to optimize each modality's contribution, ensuring precise classification results.

    The remainder of the paper is organized as follows. Section 3 derives the methods proposed in this article. Section 4 constructs five multimodal databases and evaluates the method's performance on them. Section 5 concludes the article.

    In the image preprocessing stage, palmprint and palm vein images undergo denoising, contour extraction, determination of intersections and valley points, delineation of regions, and extraction of regions of interest (ROI) [19]. The ROIs obtained are of size $ 128\; pixel\times 128\; pixel $. To achieve optimal recognition performance within a database containing 200 to 500 individuals, it is necessary to perform non-overlapping block-wise dimensionality reduction on the palmprint and palm vein ROI images.

    In the feature extraction stage, due to the influence of environmental conditions, pose variations, and noise interference during image acquisition, it is necessary to suppress and eliminate various redundant information to the maximum extent in the feature extraction process. This ensures that the intra-class features are expressed with greater distinctiveness and consistency. The partial least squares (PLS) method, as a traditional feature extraction technique, excels in maximizing the correlation between the input feature matrix and identity information labels to extract features relevant to individual identity recognition. However, in the feature dimensionality reduction process, the PLS method does not adequately consider the sparsity of features, resulting in the extraction of features with potential redundancy. This leads to less compact feature representations, making it challenging to accurately differentiate between individuals. Additionally, the PLS method lacks a noise elimination mechanism during the feature extraction process, making it susceptible to noise interference in image data. This can hinder the improvement of intra-modal correlation and subsequently affect recognition performance.

    To enhance the intra-class correlation between different modal samples, we propose a method called sparse unsupervised projection with partial least squares (SUPLS), introducing a sparse unsupervised projection (SUP) method with a "purification matrix". The "purification matrix" serves as a matrix aimed at eliminating noise information in image data while retaining useful information. Each element in the feature space of the "purification matrix" represents the weight or contribution of each feature in the samples. This method is applied to process the dimensionality-reduced image features. Through the "purification matrix", the dimensionality-reduced image features maintain sparsity while eliminating noise, preserving compact and distinctive feature representations, ensuring that intra-class features are more consistent in a shared expression space. Assuming the biological feature matrix is denoted as X and the "purification matrix" as F, the purified biological feature matrix XF is obtained through matrix multiplication. When selecting F, it is subjected to sparse constraints to ensure that the weights of noise information approach zero, thereby enhancing the saliency of important features.
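As an illustration of the purification step described above, the sketch below applies a sparse "purification matrix" F to a feature matrix X by matrix multiplication. The data are random stand-ins, and the sparse constraint is simplified here to zeroing small weights (the actual constraint is derived in the optimization below), so this is only a minimal sketch of the idea:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_feats, d_out = 6, 10, 4

X = rng.standard_normal((n_samples, n_feats))   # raw biometric feature matrix
F = rng.standard_normal((n_feats, d_out))       # candidate "purification matrix"

# simplified sparse constraint: drive the weights of noisy dimensions
# toward zero so they contribute (almost) nothing to the purified features
F[np.abs(F) < 0.8] = 0.0

XF = X @ F                                      # purified feature matrix
```

The zeroed rows of F are the dimensions treated as noise; only the remaining weighted features survive in XF.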

    The generation of the "purification matrix" is achieved by introducing constraints in the SUP method. Constraints are added during the projection process to minimize data reconstruction errors, eliminate noise, and ensure the consistency of intra-class feature representation.

    The SUP method can maintain the sparsity of the output image features and extract compact and distinctive feature representations. Through sparse projection, it effectively eliminates noise interference in the data, improving the robustness and stability of features. Under the constraints of the SUP method, the PLS method is employed to extract supervised, dimensionality-reduced, and purified features. This fully utilizes individual identity information, addressing noise and redundant information interference. The method maximizes the correlation between the input feature matrix and identity information labels, thereby extracting features with discriminative and correlated characteristics. The specific derivation of the method is as follows:

    Based on the unsupervised feature extraction method, we propose an optimization by introducing a weight allocation method in the sample space to capture the similarity between sample points. The weight allocation of sample points in the original sample space is represented as follows:

    $ {\sum }_{i, j = 1}^{n}{‖{y}_{i}-{y}_{j}‖}_{2}^{2}{s}_{ij} = 2Tr\left(YL{Y}^{T}\right) $ (1)

    where $ X = [{x}_{1}, {x}_{2}, ..., {x}_{n}]\in {R}^{d\times n} $ represents the sample matrix, d represents the number of features, and n represents the number of samples. $ Y = {P}^{T}X $ and $ {y}_{i} = {P}^{T}{x}_{i} $ are the sample matrix and the ith sample after dimensionality reduction, respectively. $ Tr $ denotes the trace operation of the matrix, $ P\in {R}^{d\times {d}^{'}}\left({d}^{'} < d\right) $ is the projection matrix, $ {d}^{'} $ denotes the subspace dimension, and $ {s}_{ij} $ denotes the similarity between $ {x}_{i} $ and $ {x}_{j} $. Building upon this, a "purification matrix" $ F\in {R}^{n\times {d}^{'}} $ is introduced to eliminate noise in the data while preserving useful information. Here, $ \lambda $ represents the regularization parameter; a higher value of $ \lambda $ pulls F closer to the projected data $ {X}^{T}P $, resulting in less noise removal. $ {S}_{t} = XH{X}^{T} $ represents the global scatter matrix, while $ {‖P‖}_{\mathrm{2, 0}} = k $ constrains the projection matrix to have exactly k non-zero rows. The representation is as follows:

    $ \underset{{P}^{T}P = I, F}{min}\frac{1}{2}{\sum }_{i, j = 1}^{n}{‖{y}_{i}-{y}_{j}‖}_{2}^{2}{s}_{ij}+\lambda {‖{X}^{T}P-F‖}_{F}^{2} $ (2)
    $ \underset{{P}^{T}P = I, {‖P‖}_{\mathrm{2, 0}} = k, F}{min}\frac{Tr\left({F}^{T}LF\right)+\lambda {‖{X}^{T}P-F‖}_{F}^{2}}{Tr\left({P}^{T}{S}_{t}P\right)} $ (3)
    $ \underset{P, F}{min}\frac{Tr\left({F}^{T}LF\right)+\lambda Tr\left({P}^{T}X{X}^{T}P-2F{P}^{T}X+F{F}^{T}\right)}{Tr\left({P}^{T}{S}_{t}P\right)}\\ s.t.{P}^{T}P = I, {‖P‖}_{\mathrm{2, 0}} = k $ (4)
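The identity in Eq (1), on which the objective in Eqs (2)–(4) builds, can be checked numerically. The sketch below uses random data, an orthonormal projection, and a Gaussian similarity matrix (assumed choices for illustration), with L taken as the standard graph Laplacian L = D − S:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, d_red = 8, 5, 3

X = rng.standard_normal((d, n))                       # samples as columns
P, _ = np.linalg.qr(rng.standard_normal((d, d_red)))  # orthonormal projection
Y = P.T @ X                                           # reduced samples, y_i = P^T x_i

# symmetric similarity matrix (Gaussian kernel on the raw samples)
D2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
S = np.exp(-D2)

# left-hand side of Eq (1): sum_{i,j} ||y_i - y_j||^2 s_ij
lhs = sum(S[i, j] * np.sum((Y[:, i] - Y[:, j]) ** 2)
          for i in range(n) for j in range(n))

# right-hand side: 2 Tr(Y L Y^T), with graph Laplacian L = D - S
L = np.diag(S.sum(axis=1)) - S
rhs = 2 * np.trace(Y @ L @ Y.T)
```

For any symmetric S the two sides agree, which is why the Laplacian form Tr(F^T L F) can replace the pairwise sum in Eqs (3) and (4).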

    Upon performing dimensionality reduction and purification of the ROI image under the sparse unsupervised constraint with the "purification matrix", we obtain the corresponding subspace sample features F and the corresponding projection matrix P. The test image is then sparsely projected using the projection matrix P to obtain its subspace features. The subspace features of the training and test set images, which have undergone dimensionality reduction and purification under the sparse unsupervised constraint, are taken as the independent variable matrix X, while the image-category information is considered the dependent variable matrix Y. The relationship between the two is established to extract more discriminative features. The specific derivation is as follows:

    In the context of biometric feature extraction, the independent variable matrix X consists of p image samples, and the dependent variable matrix Y consists of the corresponding class labels of the p image samples, denoted as $ X = {\left({x}_{1}, {x}_{2}, \cdots, {x}_{p}\right)}^{T} $ and $ Y = {\left({y}_{1}, {y}_{2}, \cdots, {y}_{p}\right)}^{T} $, respectively. Here, $ {x}_{t}\left(t = \mathrm{1, 2}, \cdots, p\right) $ represents the column vector formed by the t-th image. We extract the first pair of components from the variables X and Y, which are linear combinations of X and Y:

    $ {T}_{1} = {w}_{11}{X}_{1}+{w}_{12}{X}_{2}+\cdots +{w}_{1p}{X}_{p} = {w}_{1}^{T}X $ (5)
    $ {U}_{1} = {v}_{11}{Y}_{1}+{v}_{12}{Y}_{2}+\cdots +{v}_{1q}{Y}_{q} = {v}_{1}^{T}Y $ (6)

    where $ {T}_{1} $ and $ {U}_{1} $ carry as much information as possible about the variation in their respective matrices, and both must satisfy the following condition in order to maximize their correlation.

    $ MAX\left(cov\left({T}_{1}, {U}_{1}\right)\right) = \sqrt{var\left({T}_{1}\right)var\left({U}_{1}\right)}r\left({T}_{1}, {U}_{1}\right) $ (7)

    In the above equation, $ cov\left(.\right) $ represents the covariance operator, $ var\left(.\right) $ represents the variance operator, and $ r\left(.\right) $ represents the correlation coefficient operator. Then, the score vectors for the first pair of components are calculated based on the standardized observation matrices X and Y, and linear regression is performed on the score vectors. The relationship model between X and Y is represented as follows:

    $ Y = {\beta }_{1}{X}_{1}+{\beta }_{2}{X}_{2}+\cdots +{\beta }_{m}{X}_{m}+{Y}_{m} $ (8)

    The above set of feature vectors $ {\beta }_{1}, {\beta }_{2}, \cdots, {\beta }_{m} $ represents a set of coordinate coefficients. The ROI image undergoes projection onto a set of vectors, resulting in coordinates that signify its position in the subspace. These coordinates form the basis for subsequent classification.
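The first pair of score vectors maximizing the covariance in Eq (7) can be computed as the leading singular-vector pair of the cross-covariance matrix X^T Y. The sketch below is a minimal numpy illustration with random stand-in data (the dimensions and data are assumptions, not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(2)
p, d_x, d_y = 50, 8, 3

X = rng.standard_normal((p, d_x))        # image feature samples (rows)
Y = rng.standard_normal((p, d_y))        # stand-in for class-label matrix

Xc, Yc = X - X.mean(0), Y - Y.mean(0)    # center both blocks

# the weight pair (w1, v1) maximizing cov(Xw, Yv) over unit vectors is the
# leading singular-vector pair of the cross-covariance matrix Xc^T Yc
U, s, Vt = np.linalg.svd(Xc.T @ Yc)
w1, v1 = U[:, 0], Vt[0]

T1, U1 = Xc @ w1, Yc @ v1                # first pair of score vectors
cov_best = (T1 @ U1) / (p - 1)           # maximal covariance, = s[0]/(p-1)
```

Subsequent PLS components repeat this on the deflated matrices, which is how the coordinate coefficients above are accumulated.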

    After feature extraction by the SUPLS method, the palmprint feature vector S and the palm vein feature vector Z are obtained from the multimodal image database. To eliminate the adverse effect of the numerical imbalance between the two sets of features on feature-level fusion, S and Z are standardized separately: let $ {S}^{'} = S/‖S‖ $ and $ {Z}^{'} = Z/‖Z‖ $, so that S and Z are transformed into unit vectors; $ {S}^{'} $ and $ {Z}^{'} $ are the standardized features of the two modalities, and the double vertical bars denote the 2-norm. The combined feature vector is as follows:

    $ L = \left[\omega {Z}^{'}, \left(1-\omega \right){S}^{'}\right] $ (9)

    where $ \omega $ represents the weight, indicating the contribution of different modalities in the fusion process. The optimal value can be determined through experiments in the multimodal image database. After obtaining the fused feature, matching is performed by calculating the Euclidean distance between the p-th feature vector $ {L}_{p} $ and the q-th feature vector $ {L}_{q} $ extracted from the database. This distance can be denoted as:

    $ Distanc{e}_{pq} = {‖{L}_{p}-{L}_{q}‖}_{2} $ (10)

    If the following condition is met:

    $ Distanc{e}_{pq} < t $ (11)

    then the pair of fused features is judged to originate from the same individual and is accepted; otherwise, it is rejected. The parameter t represents a pre-defined threshold.
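Eqs (9)–(11) can be sketched as follows. The weight ω = 0.6, the feature dimension, and the random feature vectors are illustrative assumptions, not values from the paper:

```python
import numpy as np

def fuse(S, Z, omega):
    """Eq (9): normalize each modality to a unit vector, then concatenate
    with weight omega on the palm vein part and 1 - omega on the palmprint."""
    S_n = S / np.linalg.norm(S)
    Z_n = Z / np.linalg.norm(Z)
    return np.concatenate([omega * Z_n, (1.0 - omega) * S_n])

def match(L_p, L_q, t):
    """Eqs (10)-(11): accept the pair as the same individual when the
    Euclidean distance between fused feature vectors is below threshold t."""
    return np.linalg.norm(L_p - L_q) < t

rng = np.random.default_rng(3)
S = rng.standard_normal(64)        # palmprint feature vector (stand-in)
Z = rng.standard_normal(64)        # palm vein feature vector (stand-in)

L_p = fuse(S, Z, omega=0.6)
L_q = fuse(S + 0.01 * rng.standard_normal(64), Z, omega=0.6)  # near-duplicate
```

Because each modality is normalized before weighting, ω directly controls the relative contribution of palm vein versus palmprint in the fused vector.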

    A search revealed that few publicly available databases contain multiple hand-based features from the same individual simultaneously, such as palmprint, palm vein, fingerprint, knuckle print, finger vein, and hand shape. Therefore, we construct five multimodal hand-based databases from four unimodal palm vein databases and two unimodal palmprint databases. Table 1 provides the specific details of the utilized unimodal databases.

    Table 1.  Based on the single-modal hand database description.
    Databases Traits Subject Sample Total
    CASIA-P Palmprint 200 6 1200
    Tongji-P Palmprint 600 20 12000
    Tongji-V [20] Palm-vein 600 20 12000
    CASIA-V [21] Palm-vein 200 6 1200
    PolyU-NIR [22] Palm-vein 250 6 1500
    Self-built Palm-vein 530 10 5300


    The multimodal database CASIA-PV combines two publicly available unimodal databases derived from the CASIA multispectral palmprint database, with palmprint and palm vein features extracted from images captured at 460 nm and 850 nm wavelengths, respectively. CASIA-P and CASIA-V contain palmprint and palm vein features obtained from the left and right hands of 100 individuals. Each individual has 6 images per hand, and the left and right hands are treated as separate individuals, giving a total of 1200 images each for CASIA-P and CASIA-V. The region of interest (ROI) size is $ 128\; pixel\times 128\; pixel $.

    The multimodal database Tongji-PV consists of two publicly available unimodal databases: Tongji-P, which employs contact-based palmprint image acquisition and includes 12,000 palmprint images from 600 individuals, and Tongji-V, which adopts non-contact palm vein image acquisition with a light source wavelength of 940 nm and includes 12,000 palm vein images from 600 individuals aged between 20 and 50. The ROI region size for both databases is $ 128\; pixel\times 128\; pixel $.

    The multimodal database NIR-CAP consists of PolyU-NIR (the Hong Kong Polytechnic University multispectral palm vein database) and CASIA-P (the CASIA palmprint database). The PolyU-NIR database acquires palm vein images under near-infrared (NIR) illumination, using a CCD camera and a high-power halogen light source in a contact-based acquisition device. The NIR-CAP database takes 200 individuals from each constituent database, with 6 samples per individual. The ROI region size is $ 128\; pixel\times 128\; pixel $.

    The multimodal database Self-built-CAP consists of a self-built palm vein database (Self-built) and the CASIA-P palmprint database. The Self-built database contains non-contact palm vein images acquired from the left hand of 530 individuals aged between 20 and 50 years, with 10 images per individual, for a total of 5300 images. The Self-built-CAP database takes 200 individuals from each constituent database, with 6 samples per individual. The ROI region size is $ 128\; pixel\times 128\; pixel $.

    The multimodal database NIR-TP combines the Hong Kong Polytechnic University multispectral palmprint database (PolyU-NIR) and the Tongji University palmprint database (Tongji-P). It takes 250 individuals from each of the two databases, with 6 samples per individual and an ROI region size of $ 128\; pixel\times 128\; pixel $. Table 2 presents the details of the multimodal databases used.

    Table 2.  Description of the multimodal hand-based databases.
    Database         Subjects   Samples per subject   Total images   Image size
    CASIA-PV         200        6                     2400           128 × 128
    Tongji-PV        500        6                     6000           128 × 128
    NIR-CAP          200        6                     2400           128 × 128
    Self-built-CAP   200        6                     2400           128 × 128
    NIR-TP           250        6                     3000           128 × 128


    To evaluate the accuracy of the proposed hand feature fusion recognition method, training and test subsets are drawn from each of the five multimodal databases, with approximately 50% of the individuals in each database randomly chosen for training. The individuals in the training and test subsets are disjoint. Feature extraction and feature fusion are performed simultaneously on the test subset, followed by classification and matching against the training subset.
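    A subject-disjoint split like the one described above can be sketched in Python; this is an illustrative sketch, not the authors' code, and the subject-id layout and 50/50 ratio are assumptions:

```python
import random

def subject_disjoint_split(samples_by_subject, train_fraction=0.5, seed=0):
    """Randomly assign ~train_fraction of the SUBJECTS (not images) to the
    training subset, so no individual appears in both subsets."""
    subjects = sorted(samples_by_subject)
    rng = random.Random(seed)
    rng.shuffle(subjects)
    n_train = int(len(subjects) * train_fraction)
    train = {s: samples_by_subject[s] for s in subjects[:n_train]}
    test = {s: samples_by_subject[s] for s in subjects[n_train:]}
    return train, test

# e.g. 200 subjects with 6 samples each, as in NIR-CAP
db = {f"subj{i:03d}": [f"subj{i:03d}_{j}" for j in range(6)] for i in range(200)}
train, test = subject_disjoint_split(db)
assert not set(train) & set(test)     # no individual in both subsets
assert len(train) + len(test) == 200
```

    Splitting by subject rather than by image is what makes the protocol a true open-set evaluation: test individuals are never seen during training.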

    Recognition performance was evaluated using metrics such as the false rejection rate (FRR), false acceptance rate (FAR), correct recognition rate (CRR) [23], equal error rate (EER), and receiver operating characteristic (ROC) curves [24]. CRR is defined as:

    $ CRR = \frac{\text{number of experiments with correct identification}}{\text{total number of identification experiments}}\times 100\% $

    EER is a comprehensive indicator of the performance of an identification system. In general, a smaller EER indicates superior performance in the identification system. The ROC curve visually depicts the variations in FAR and FRR as the discrimination threshold changes.
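    To make these metrics concrete, here is a minimal NumPy sketch (not the authors' implementation) that computes FAR and FRR at a given threshold for similarity scores, and estimates the EER by sweeping the threshold:

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """For similarity scores (higher = better match):
    FAR = fraction of impostor comparisons accepted,
    FRR = fraction of genuine comparisons rejected."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    far = float(np.mean(impostor >= threshold))
    frr = float(np.mean(genuine < threshold))
    return far, frr

def equal_error_rate(genuine, impostor):
    """Sweep the threshold over all observed scores and return the
    operating point where FAR and FRR are closest (the EER)."""
    thresholds = np.unique(np.concatenate([np.asarray(genuine), np.asarray(impostor)]))
    best = min(thresholds, key=lambda t: abs(far_frr(genuine, impostor, t)[0]
                                             - far_frr(genuine, impostor, t)[1]))
    far, frr = far_frr(genuine, impostor, best)
    return (far + frr) / 2

genuine = [0.90, 0.85, 0.80, 0.95, 0.70]   # same-person comparison scores
impostor = [0.20, 0.30, 0.10, 0.40, 0.35]  # different-person comparison scores
print(equal_error_rate(genuine, impostor))  # 0.0: the two score sets separate cleanly
```

    The ROC curve is obtained by plotting the (FAR, FRR) pairs produced by the same threshold sweep.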

    Figure 2.  ROI samples of single-modal database.

    Before joint feature selection and sparse unsupervised projection extraction, each image is first partitioned into blocks to reduce its dimensionality, which highlights the major features while reducing the number of image features. Three parameters are then tuned: the sample dimension d1 after sparse processing and subspace denoising, the number of selected features k1, and the number of principal components k produced by partial least squares; block sizes of 2 × 2, 4 × 4, and 8 × 8 are considered.

    Based on findings in the literature [25], the sample dimension (d1) and the number of selected features (k1) should be set to the same value: experimental results indicate that keeping them equal maximizes the benefit to the overall procedure. In the experiments, both values are kept equal and never exceed the original sample dimension of the image. Similarly, the literature [23] shows that, when partitioning a 128 × 128 image for dimensionality reduction, 2 × 2 blocks are too small and leave a large amount of redundant information in the image features, which significantly increases recognition time and leaves the dimensionality reduction incomplete. With 8 × 8 blocks, the blocks are too large for the main texture information in the image features to be fully highlighted, leading to lower recognition accuracy. Therefore, the 4 × 4 block standard was used in the experiments. The following parameter experiments were carried out on the Self-built database.
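    The effect of block partitioning (128 × 128 reduced to 32 × 32 under the 4 × 4 standard) can be sketched as below. Block-mean pooling is an assumption for illustration; the excerpt does not specify the exact per-block reduction operator:

```python
import numpy as np

def block_reduce_mean(img, block=4):
    """Partition a 2-D image into block x block cells and keep each cell's
    mean, so a 128 x 128 image with 4 x 4 blocks becomes 32 x 32 features."""
    h, w = img.shape
    assert h % block == 0 and w % block == 0, "image must tile evenly"
    return img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

img = np.arange(128 * 128, dtype=float).reshape(128, 128)
print(block_reduce_mean(img, block=4).shape)  # (32, 32)
print(block_reduce_mean(img, block=8).shape)  # (16, 16)
```

    The trade-off discussed above falls out directly: smaller blocks keep more (possibly redundant) detail, larger blocks average away the texture.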

    As the number of principal components (k) increases, the recognition rate gradually improves. With the 4 × 4 block standard, the reduced image features have size 32 × 32. The sample dimension (d1) and the number of selected features (k1) must stay within the image sample dimension, so 1000 is close to the upper limit. Table 3 shows that, within this range, when the number of principal components reaches 300, the EER is 0.0028% and the CRR is 100%. However, increasing these values further toward 1000 raises the number of features and lowers the program's efficiency.

    Table 3.  EER and CRR for different numbers of principal components, sample dimensions and feature numbers (Self-built database).
    Principal components (k)      Subspace sample dimension (d1) / number of selected features (k1)
                       400      500      600      700      800      900      1000
    50    EER (%)   6.2743   6.2083   6.4043   6.6044   6.2124   6.4740   6.4043
          CRR (%)    97.45    97.62    96.76    97.54    98.45    96.81    96.55
    100   EER (%)   6.4043   6.2273   6.3418   6.4753   6.1839   6.2870   6.0895
          CRR (%)    97.65    96.66    97.25    97.65    96.99    98.86  96.6912
    150   EER (%)   3.3378   3.0536   2.7010   2.6702   2.7333   2.5918   2.5367
          CRR (%)    97.21    97.66    98.13    97.44    97.34    97.95    97.73
    200   EER (%)   0.8011   0.7907   0.5337   0.4680   0.4789   0.4090   0.4052
          CRR (%)    99.25    99.41    99.41    99.48    99.44    99.69    99.72
    250   EER (%)   0.2588   0.1366   0.1333   0.1334   0.1250   0.0787   0.0765
          CRR (%)    99.84    99.91    99.92    99.97    99.97    99.97    99.97
    300   EER (%)   0.0725   0.0444   0.0191   0.0129   0.0056   0.0045   0.0028
          CRR (%)    99.97    99.98    99.99    99.99    99.99    99.99   100.00


    For the proposed SUPLS method, feature ablation and module ablation experiments were conducted to verify its performance against PLS-based feature-level fusion recognition and SUPLS-based unimodal recognition, respectively; in the unimodal SUPLS experiments, the EERs are reported in the order palm vein/palmprint. The experimental results are shown in Tables 4 and 5 below.

    Table 4.  Feature-level fusion recognition results based on PLS (palm vein/palmprint).
    Databases CASIA-PV Tongji-PV NIR-CAP Self-built-CAP NIR-TP
    EER (%) 0.8870/0.6667 0.8623/0.9447 0.5070/0.6667 0.4179/0.6667 0.8664/0.2810

    Table 5.  Single mode recognition results based on SUPLS.
    Databases CASIA-PV Tongji-PV NIR-CAP Self-built-CAP NIR-TP
    EER (%) 0.2633/0.1123 0.7585/0.6667 0.1000/0.2633 0.5600/0.2633 0.1742/0.2400


    To evaluate the performance of the proposed SUPLS method, we test it on the multimodal databases and compare it with several classical methods commonly used in palmprint and palm vein recognition, including DCT_fusion [26], Pca+Lpp, DBM [27], 2DLDA [28], PLS [29], and JPCDA [30]. For these methods, the palmprint and palm vein features in each multimodal database were extracted independently; the modal features were then normalized, and a weight-based fusion method was applied to obtain new fused features. The recognition performance was evaluated, and the experimental results are presented in Table 6 below. The ROC curves of the aforementioned methods on the five multimodal fusion databases are presented in Figures 3 to 7 below. On the five multimodal databases, our method improves the EER by 0.1494%, 0.1511%, 0.0195%, 0.0132%, and 0.0029%, respectively, over the next most effective method (JPCDA). The performance relative to the remaining methods can be seen in the ROC curves of Figures 3 to 7, and the performance of our method remains stable.
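    The normalize-then-weight fusion step used for the baseline methods can be sketched as follows. Min-max normalization and a single scalar weight w are illustrative assumptions, since the excerpt does not specify the exact normalization or weighting scheme:

```python
import numpy as np

def minmax_normalize(x):
    """Scale a feature vector into [0, 1] before fusion."""
    x = np.asarray(x, dtype=float)
    span = x.max() - x.min()
    return (x - x.min()) / span if span > 0 else np.zeros_like(x)

def weighted_feature_fusion(palmprint_feat, palmvein_feat, w=0.5):
    """Normalize each modality, scale by w and (1 - w), and concatenate
    into a single fused feature vector."""
    p = w * minmax_normalize(palmprint_feat)
    v = (1 - w) * minmax_normalize(palmvein_feat)
    return np.concatenate([p, v])

fused = weighted_feature_fusion(np.array([10.0, 20.0, 30.0]),
                                np.array([0.1, 0.5, 0.9]), w=0.6)
print(fused.shape)  # (6,)
```

    Normalizing each modality first keeps one modality's larger raw feature range from dominating the fused vector; the weight then sets the relative contribution explicitly.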

    Table 6.  Comparison of equal error rates (%) of multiple methods.
    Database         DCT_fusion   Pca+Lpp   DBM       2DLDA     PLS      JPCDA    SUPLS
    CASIA-PV         34.3667      28.3417   25.6738   30.6333   0.6598   0.1667   0.0173
    Tongji-PV        0.6667       1.2068    0.5384    1.0133    0.4728   0.1703   0.0192
    NIR-CAP          25.5000      12.4865   2.2654    26.3000   0.4912   0.0254   0.0059
    Self-built-CAP   25.2000      16.1667   7.6318    8.3057    0.4144   0.0142   0.0010
    NIR-TP           1.5581       8.0669    0.2709    2.5765    0.3200   0.0037   0.0008

    Figure 3.  ROC curve of CASIA-PV database.
    Figure 4.  ROC curve of Tongji-PV database.
    Figure 5.  ROC curve of NIR-CAP database.
    Figure 6.  ROC curve of Self-built-CAP database.
    Figure 7.  ROC curve of NIR-TP database.

    In this paper, we propose a joint learning-based feature-level fusion recognition method for palmprints and palm veins. The method first applies a sparse unsupervised projection algorithm with a "purification matrix" constraint to the palmprint and palm vein region-of-interest images. A partial least squares algorithm then extracts, from each modality, subspaces with high grayscale variance and high category correlation, promoting consistency of intra-modal representations. Finally, a weighted sum fuses the palmprint and palm vein features, dynamically optimizing each modality's contribution to classification. Experimental evaluations on five multimodal databases, composed of six unimodal databases including the Chinese Academy of Sciences multispectral palmprint and palm vein databases, yielded EERs of 0.0173%, 0.0192%, 0.0059%, 0.0010%, and 0.0008%. The method exploits both the stability of palmprint features and the high security of palm vein features to strengthen the anti-counterfeiting capability of the system. It can be widely used for authentication and access control of confidential information in artificial intelligence restaurants, unmanned hotels, smart banking, smart medical care, smart communities, traffic security checks, and other fields.

    The authors declare that they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research was funded by the National Natural Science Foundation of China (61991413), the China Postdoctoral Science Foundation (2019M651142), the Natural Science Foundation of Liaoning Province (2021-KF-12-07), and the Natural Science Foundation of Liaoning Province (2023-MS-322).

    The authors declare that there are no conflicts of interest.


    Acknowledgments

    The authors gratefully thank the University of Malaya and the Ministry of Higher Education, Malaysia for the HIR-MOHE Research Grant F000002-21001 funding. Mohammad Azizur Rahman is grateful for the grant-in-aid provided by Jahangirnagar University and the University Grants Commission of Bangladesh.

    Conflict of interest

    The authors report no conflicts of interest and have received no payment in preparation of this manuscript.

    [1] Reiman EM (2014) Alzheimer's disease and other dementias: advances in 2013. Lancet Neurol 13: 3–5. doi: 10.1016/S1474-4422(13)70257-6
    [2] Liao L, Cheng D, Wang J, et al. (2004) Proteomic characterization of postmortem amyloid plaques isolated by laser capture microdissection. J Biol Chem 279: 37061–37068. doi: 10.1074/jbc.M403672200
    [3] Wang Q, Wu J, Rowan MJ, et al. (2005) β‐amyloid inhibition of long‐term potentiation is mediated via tumor necrosis factor. Eur J Neurosci 22: 2827–2832. doi: 10.1111/j.1460-9568.2005.04457.x
    [4] Lecanu L, Papadopoulos V (2013) Modeling Alzheimer's disease with non-transgenic rat models. Alzheimers Res Ther 5: 17.
    [5] Sparks DL, Scheff SW, Hunsaker JC, et al. (1994) Induction of Alzheimer-like β-amyloid immunoreactivity in the brains of rabbits with dietary cholesterol. Exp Neurol 126: 88–94. doi: 10.1006/exnr.1994.1044
    [6] Refolo LM, Malester B, LaFrancois J, et al. (2000) Hypercholesterolemia accelerates the Alzheimer's amyloid pathology in a transgenic mouse model. Neurobiol Dis 7: 321–331. doi: 10.1006/nbdi.2000.0304
    [7] Anstey KJ, Lipnicki DM, Low LF (2008) Cholesterol as a risk factor for dementia and cognitive decline: A systematic review of prospective studies with meta-analysis. Am J Geriatr Psychiatry 16: 343–354. doi: 10.1097/01.JGP.0000310778.20870.ae
    [8] Ansari MA, Scheff SW (2010) Oxidative stress in the progression of Alzheimer disease in the frontal cortex. J Neuropathol Exp Neurol 69: 155–167. doi: 10.1097/NEN.0b013e3181cb5af4
    [9] McLellan ME, Kajdasz ST, Hyman BT, et al. (2003) In vivo imaging of reactive oxygen species specifically associated with thioflavine S-positive amyloid plaques by multiphoton microscopy. J Neurosci 23: 2212–2217. doi: 10.1523/JNEUROSCI.23-06-02212.2003
    [10] Li F, Calingasan NY, Yu F, et al. (2004) Increased plaque burden in brains of APP mutant MnSOD heterozygous knockout mice. J Neurochem 89: 1308–1312. doi: 10.1111/j.1471-4159.2004.02455.x
    [11] Dumont M, Wille E, Stack C, et al. (2009) Reduction of oxidative stress, amyloid deposition, and memory deficit by manganese superoxide dismutase overexpression in a transgenic mouse model of Alzheimer's disease. FASEB J 23: 2459–2466. doi: 10.1096/fj.09-132928
    [12] Murakami K, Murata N, Noda Y, et al. (2011) SOD1 (copper/zinc superoxide dismutase) deficiency drives amyloid β protein oligomerization and memory loss in mouse model of Alzheimer disease. J Biol Chem 286: 44557–44568. doi: 10.1074/jbc.M111.279208
    [13] Dröge W (2002) Free radicals in the physiological control of cell function. Physiol Rev 82: 47–95. doi: 10.1152/physrev.00018.2001
    [14] Tramutola A, Lanzillotta C, Perluigi M, et al. (2017) Oxidative stress, protein modification and Alzheimer disease. Brain Res Bull 133: 88–99. doi: 10.1016/j.brainresbull.2016.06.005
    [15] Mamun AA, Hashimoto M, Katakura M, et al. (2014) Neuroprotective effect of madecassoside evaluated using amyloid β1-42-mediated in vitro and in vivo Alzheimer's disease models. Intl J Indigenous Med Plants 47: 1669–1682.
    [16] Olton DS, Samuelson RJ (1976) Remembrance of places passed: spatial memory in rats. J Exp Psychol Anim Behav Process 2: 97–116. doi: 10.1037/0097-7403.2.2.97
    [17] Jarrard LE, Okaichi H, Steward O, et al. (1984) On the role of hippocampal connections in the performance of place and cue tasks: comparisons with damage to hippocampus. Behav Neurosci 98: 946–954. doi: 10.1037/0735-7044.98.6.946
    [18] Shevchenko G, Sjödin MO, Malmström D, et al. (2010) Cloud-point extraction and delipidation of porcine brain proteins in combination with bottom-up mass spectrometry approaches for proteome analysis. J Proteome Res 9: 3903–3911. doi: 10.1021/pr100116k
    [19] Stepanichev MY, Zdobnova IM, Zarubenko II, et al. (2007) Studies of the effects of central administration of β-amyloid peptide (25–35): pathomorphological changes in the hippocampus and impairment of spatial memory. Neurosci Behav Physiol 36: 101–106.
    [20] Holscher C, Gengler S, Gault VA, et al. (2007) Soluble beta-amyloid[25-35] reversibly impairs hippocampal synaptic plasticity and spatial learning. Eur J Pharmacol 561: 85–90. doi: 10.1016/j.ejphar.2007.01.040
    [21] Parihar MS, Brewer GJ (2007) Mitoenergetic failure in Alzheimer disease. Am J Physiol Cell Physiol 292: C8–C23. doi: 10.1152/ajpcell.00232.2006
    [22] Butterfield DA, Hardas SS, Lange ML (2010) Oxidatively modified glyceraldehyde-3-phosphate dehydrogenase (GAPDH) and Alzheimer's disease: many pathways to neurodegeneration. J Alzheimers Dis 20: 369–393. doi: 10.3233/JAD-2010-1375
    [23] Minjarez B, Valero Rustarazo ML, Sanchez del Pino MM, et al. (2013) Identification of polypeptides in neurofibrillary tangles and total homogenates of brains with Alzheimer's disease by tandem mass spectrometry. J Alzheimers Dis 34: 239–262. doi: 10.3233/JAD-121480
    [24] Musunuri S, Wetterhall M, Ingelsson M, et al. (2014) Quantification of the brain proteome in Alzheimer's disease using multiplexed mass spectrometry. J Proteome Res 13: 2056–2068. doi: 10.1021/pr401202d
    [25] Butterfield DA, Perluigi M, Reed T, et al. (2012) Redox proteomics in selected neurodegenerative disorders: from its infancy to future applications. Antioxid Redox Signal 17: 1610–1655. doi: 10.1089/ars.2011.4109
    [26] Poon HF, Calabrese V, Calvani M, et al. (2006) Proteomics analyses of specific protein oxidation and protein expression in aged rat brain and its modulation by L-acetylcarnitine: insights into the mechanisms of action of this proposed therapeutic agent for CNS disorders associated with oxidative stress. Antioxid Redox Signal 8: 381–394. doi: 10.1089/ars.2006.8.381
    [27] Díez‐Vives C, Gay M, García‐Matas S, et al. (2009) Proteomic study of neuron and astrocyte cultures from senescence‐accelerated mouse SAMP8 reveals degenerative changes. J Neurochem 111: 945–955. doi: 10.1111/j.1471-4159.2009.06374.x
    [28] Blair LJ, Zhang B, Dickey CA (2013) Potential synergy between tau aggregation inhibitors and tau chaperone modulators. Alzheimers Res Ther 5: 41–50. doi: 10.1186/alzrt207
    [29] Yao J, Irwin RW, Zhao L, et al. (2009) Mitochondrial bioenergetic deficit precedes Alzheimer's pathology in female mouse model of Alzheimer's disease. PNAS 106: 14670–14675. doi: 10.1073/pnas.0903563106
    [30] Cui Y, Huang M, He Y, et al. (2011) Genetic ablation of apolipoprotein A-IV accelerates Alzheimer's disease pathogenesis in a mouse model. Am J Pathol 178: 1298–1308. doi: 10.1016/j.ajpath.2010.11.057
    [31] Keeney JTR, Swomley AM, Förster S, et al. (2013) Apolipoprotein A‐I: Insights from redox proteomics for its role in neurodegeneration. Proteomics Clin Appl 7: 109–122. doi: 10.1002/prca.201200087
    [32] Liu HC, Hu CJ, Chang JG, et al. (2006) Proteomic identification of lower apolipoprotein AI in Alzheimer's disease. Dement Geriatr Cogn Disord 21: 155–161. doi: 10.1159/000090676
    [33] Corder EH, Saunders AM, Strittmatter WJ, et al. (1993) Gene dose of apolipoprotein E type 4 allele and the risk of Alzheimer's disease in late onset families. Science 261: 921–923. doi: 10.1126/science.8346443
    [34] Sizova D, Charbaut E, Delalande F, et al. (2007) Proteomic analysis of brain tissue from an Alzheimer's disease mouse model by two-dimensional difference gel electrophoresis. Neurobiol Aging 28: 357–370. doi: 10.1016/j.neurobiolaging.2006.01.011
    [35] Ellis RJ, Olichney JM, Thal LJ, et al. (1996) Cerebral amyloid angiopathy in the brains of patients with Alzheimer's disease: The CERAD experience, part XV. Neurology 46: 1592–1596. doi: 10.1212/WNL.46.6.1592
    [36] Kim J, Basak JM, Holtzman DM (2009) The role of apolipoprotein E in Alzheimer's disease. Neuron 63: 287–303. doi: 10.1016/j.neuron.2009.06.026
    [37] Choi J, Gao J, Kim J, et al. (2015) The E3 ubiquitin ligase Idol controls brain LDL receptor expression, ApoE clearance, and Aβ amyloidosis. Sci Transl Med 7: 314ra184.
    [38] Tosto G, Reitz C (2013) Genome-wide association studies in Alzheimer's disease: A review. Curr Neurol Neurosci Rep 13: 381. doi: 10.1007/s11910-013-0381-0
    [39] Yu JT, Tan L (2012) The role of clusterin in Alzheimer's disease: pathways, pathogenesis and therapy. Mol Neurobiol 45: 314–326. doi: 10.1007/s12035-012-8237-1
    [40] Hong I, Kang T, Yoo Y, et al. (2013) Quantitative proteomic analysis of the hippocampus in the 5XFAD mouse model at early stages of Alzheimer's disease pathology. J Alzheimers Dis 36: 321–334. doi: 10.3233/JAD-130311
    [41] Puglielli L, Konopka G, Pack-Chung E, et al. (2001) Acyl-coenzyme A: cholesterol acyltransferase modulates the generation of the amyloid β-peptide. Nat Cell Biol 3: 905–912. doi: 10.1038/ncb1001-905
    [42] Sjögren M, Mielke M, Gustafson D, et al. (2006) Cholesterol and Alzheimer's disease-is there a relation? Mech Ageing Dev 127: 138–147. doi: 10.1016/j.mad.2005.09.020
    [43] Cortes-Canteli M, Zamolodchikov D, Ahn HJ, et al. (2012) Fibrinogen and altered hemostasis in Alzheimer's disease. J Alzheimers Dis 32: 599–608. doi: 10.3233/JAD-2012-120820
    [44] Ahn HJ, Zamolodchikov D, Cortes-Canteli M, et al. (2010) Alzheimer's disease peptide β-amyloid interacts with fibrinogen and induces its oligomerization. Proc Natl Acad Sci U S A 107: 21812–21817. doi: 10.1073/pnas.1010373107
    [45] Li X, Buxbaum JN (2011) Transthyretin and the brain re-visited: Is neuronal synthesis of transthyretin protective in Alzheimer's disease? Mol Neurodegener 6: 79. doi: 10.1186/1750-1326-6-79
    [46] Flynn JM, Melov S (2013) SOD2 in mitochondrial dysfunction and neurodegeneration. Free Radic Biol Med 62: 4–12. doi: 10.1016/j.freeradbiomed.2013.05.027
    [47] De Leo ME, Borrello S, Passantino M, et al. (1998) Oxidative stress and overexpression of manganese superoxide dismutase in patients with Alzheimer's disease. Neurosci Lett 250: 173–176. doi: 10.1016/S0304-3940(98)00469-8
    [48] Manavalan A, Mishra M, Sze SK, et al. (2013) Brain-site-specific proteome changes induced by neuronal P60TRP expression. Neurosignals 21: 129–149. doi: 10.1159/000343672
    [49] Sun KH, Chang KH, Clawson S, et al. (2011) Glutathione‐S‐transferase P1 is a critical regulator of Cdk5 kinase activity. J Neurochem 118: 902–914. doi: 10.1111/j.1471-4159.2011.07343.x
    [50] Luo J, Wärmländer SKTS, Gräslund A, et al. (2014) Non-chaperone proteins can inhibit aggregation and cytotoxicity of Alzheimer amyloid β peptide. J Biol Chem 289: 27766–27775. doi: 10.1074/jbc.M114.574947
    [51] Habib KL, Lee TCM, Yang J (2010) Inhibitors of catalase-amyloid interactions protect cells from β-amyloid-induced oxidative stress and toxicity. J Biol Chem 285: 38933–38943. doi: 10.1074/jbc.M110.132860
    [52] Cocciolo A, Di Domenico F, Coccia R, et al. (2012) Decreased expression and increased oxidation of plasma haptoglobin in Alzheimer disease: Insights from redox proteomics. Free Radic Biol Med 53: 1868–1876. doi: 10.1016/j.freeradbiomed.2012.08.596
    [53] Spagnuolo MS, Maresca B, La Marca V, et al. (2014) Haptoglobin interacts with apolipoprotein E and beta-amyloid and influences their crosstalk. ACS Chem Neurosci 5: 837–847. doi: 10.1021/cn500099f
    [54] Poynton AR, Hampton MB (2014) Peroxiredoxins as biomarkers of oxidative stress. Biochim Biophys Acta 1840: 906–912. doi: 10.1016/j.bbagen.2013.08.001
  • neurosci-06-04-299-s001.pdf
  • © 2019 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
