
With the dynamic development of sensor technology, small, low-cost wearable devices are widely deployed in the medical and health fields. However, wearable devices cannot deploy sophisticated security authentication mechanisms due to limited computing and storage capacity [1], which puts the privacy of wearable device owners at risk [2]. In recent years, biometric-based methods have become the preferred means of identification. Mobile devices already implement identification systems based on biometric features such as the face [3] and fingerprints [4]. However, some of these biometric techniques are intrusive to users and require complex hardware that is not conducive to application in wearable devices. Therefore, non-intrusive biometrics that require little additional hardware, such as gait recognition, have attracted considerable attention from academia and industry [5].
For gait recognition based on wearable devices, inertial sensors are most commonly used. A sensor-based gait recognition method was first proposed by Ailisto et al. [6]: the acceleration signal at the waist during walking was collected, and gait recognition was performed using template matching and cross-correlation calculation. Building on their research, Rong et al. [7] used the dynamic time warping (DTW) algorithm for gait-curve matching. In 2019, Sun et al. [8] proposed a speed-adaptive gait cycle segmentation method and an adaptive matching threshold generation method; their experiments showed a 96.9% user recognition accuracy. However, the robustness of template matching for gait recognition in complex environments is poor. With the development of artificial intelligence technology, gait recognition methods based on machine learning, such as the support vector machine (SVM) [9] and k-nearest neighbor (KNN) [10], are increasing. Although machine learning methods can be effective for gait recognition, the manual feature engineering they require is labor-intensive and time-consuming.
In recent years, deep learning-based gait recognition methods have become popular [11,12,13,14,15,16]. In 2016, Gadaleta et al. [11] used convolutional neural networks (CNNs) as feature extractors for gait recognition for the first time. They designed the IDNet user recognition framework based on a CNN and a one-class SVM; their experiments showed that the CNN can automatically learn gait features and obtain better recognition performance. In 2018, Ruben et al. [12] proposed an end-to-end deep learning method that feeds information from multiple sensors separately into single-channel CNNs and fuses the extracted results. They conducted experiments on the OU-ISIR dataset, and the accuracy of identity recognition using their method improved from 83.3% to 94.8%. In 2020, Zou et al. [13] used smartphones to collect accelerometer and gyroscope data while walking in a field environment for recognition and authentication. They proposed a deep neural network that combined a CNN and long short-term memory (LSTM); identification tests on a dataset containing 118 subjects yielded an accuracy of 93.7%. To extract the temporal features of gait, Tran et al. [14] proposed a new multi-model LSTM network with six channels, feeding each signal from a group of continuous signals into a separate LSTM channel, and then constructed a hybrid architecture combining the LSTM network and a CNN. The experimental results showed an identification accuracy of 94.15% on the whuGait dataset. In 2021, Middya et al. [15] proposed a new deep CNN model for privacy-protected user identification; evaluated on a real-world benchmark dataset, it achieved an accuracy of 98.8%.
Although these methods can achieve better recognition performance, the models have a large number of model parameters and high memory usage, which is not suitable for wearable devices [16].
In order to improve the performance of gait identification based on wearable devices and reduce the memory size and improve robustness of the model, we propose a multimodal fusion method based on a double-channel depthwise separable convolutional neural network (DC-DSCNN). We improve the model by reducing the model complexity to make it suitable for wearable devices. In addition, we improve the robustness of gait recognition by fusing the time series data and Gramian angular field (GAF) images. The experimental results show that the method proposed in this paper has high classification accuracy and small memory size.
The main contributions of this paper are as follows:
1) To evaluate the performance of models for gait recognition, we build a gait dataset collected from 24 subjects. Each subject provides 6 samples, and each sample contains thousands of data points. The gait cycles are then divided according to the sample data, and 100 gait cycles are selected from each subject's data, for a total of 2400 gait cycle samples.
2) A new lightweight DSCNN network based on depthwise separable convolution is proposed, which can automatically extract features with a small number of parameters. This model increases recognition accuracy while reducing the model's memory requirements.
3) A multimodal fusion DC-DSCNN network based on the time series data and GAF images of acceleration and angular velocity is proposed which can automatically learn gait features and realize gait identification.
The rest of the paper is organized as follows. Section 2 describes the process of gait cycle data acquisition, extraction, and processing in the HD Dataset based on wearable devices. We also briefly introduce the basic principles of GAF and depthwise separable convolution (DSC) in this section. The details of the proposed model are also presented in this section. The experimental results are given in Section 3. Finally, the conclusion and future work are discussed in Section 4.
In this paper, a miniature wireless inertial-magnetic motion tracker (MTw), which includes a 3D accelerometer and a 3D gyroscope, was used to collect gait data. The three-axis accelerometer obtains the axial acceleration by measuring the force on the wearable device along each axis (X, Y, or Z); the measured acceleration reflects the movement of the wearable device user. The three-axis gyroscope works by measuring the angle between the vertical axis of the gyroscope rotor and the device in a three-dimensional coordinate system and calculating the angular velocity from it. The gyroscope therefore captures angular velocity by measuring its own rotational state, which also characterizes the user's movement. To improve recognition accuracy, a method of fusing acceleration and angular velocity data was adopted.
To enable identification without interfering with normal activities, wearable sensors should be light, small, and easy to wear. This paper selected the MTw (Figure 1) designed by Xsens as the data collection unit [17]. It has a size of 47 mm × 30 mm × 13 mm and weighs 16 g. The MTw was placed on the right side of the waist and collected the acceleration and angular velocity data of the waist at a sampling frequency of 100 Hz. The MTw wirelessly transmits acceleration and angular velocity data in real time via the Awinda Station connected to a recording PC. The process of collecting gait data is illustrated in Figure 1.
In this study, nine male and fifteen female volunteers were recruited via random sampling to collect their gait data during walking. All subjects were informed of the purpose of the experiment, and data collection was performed with verbal consent. The volunteers' ages ranged from 22 to 60 years, their heights from 1.58 to 1.87 meters, and the average weight was 45.5 kg. None of the volunteers exhibited an abnormal gait during the experiment. Subjects were free to choose their own shoes and walking environment. For homogeneity of the experimental data, the MTw was worn on the right side of the waist of all subjects, with the Z-axis aligned with the direction of gravity, the X-axis representing the forward direction, and the Y-axis the horizontal direction. Subjects were asked to walk for one minute on a specified path at their own walking speed and to repeat this six times. The inertial signal data were recorded from the start of the walk and saved after each completed path cycle, so each volunteer has six gait samples. The data were collected at a sampling frequency of 100 Hz and processed by a Kalman filter. For gait identification, all collected data were processed using the method described in Section 2.3. We constructed the HD Dataset by selecting 100 gait cycle segments from each subject's data. The gait cycle segments were then labeled with the volunteer's number. After matching all gait cycle segments and labels, we split the dataset into training and testing sets at a ratio of 7:3 using a random shuffle, giving 1,680 training samples and 720 test samples. The detailed information of these datasets is shown in Table 1.
| Dataset | Usage | Number of subjects | Samples for Training | Samples for Test |
| --- | --- | --- | --- | --- |
| HD Dataset | Identification | 24 | 1,680 | 720 |
| whuGait Dataset #1 | Identification | 118 | 33,104 | 3,740 |
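The 7:3 random-shuffle split of the 2400 labeled gait cycles into 1,680 training and 720 test samples can be sketched as follows (an illustrative helper, not the authors' code; the function name and fixed seed are assumptions):

```python
import random

def split_dataset(samples, labels, train_ratio=0.7, seed=42):
    """Shuffle paired (sample, label) data and split into train/test sets."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)    # reproducible random shuffle
    cut = int(len(idx) * train_ratio)   # 70% boundary
    train = [(samples[i], labels[i]) for i in idx[:cut]]
    test = [(samples[i], labels[i]) for i in idx[cut:]]
    return train, test
```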
Zou et al. [13] constructed the whuGait dataset by collecting gait data from 118 subjects using a smartphone's inertial sensors under unconstrained conditions in the field. Each sample in the dataset contains 3-axis acceleration data and 3-axis angular velocity data at a sampling frequency of 50 Hz. Four of the datasets were constructed for gait recognition, and two can be used for gait authentication. In this study, Dataset #1 was selected to validate the gait recognition performance of the model. It contains 33,104 training samples and 3,740 test samples, and each sample contains two gait cycles. Table 1 shows the details of Dataset #1.
Walking is the movement of the body forward through a series of repetitive limb movements while maintaining stability. During the forward movement of the body, one lower limb acts as the source of support while the other swings forward and becomes the next source of support when its heel strikes the ground. The two lower limbs then keep exchanging roles until the destination is reached. The completion of a single sequence from one heel strike to the next heel strike of the same lower limb is referred to as a gait cycle [18]. A sample of gait data is shown in Figure 2, which includes acceleration (a) and angular velocity (b) curves.
From Figure 2 it can be seen that the gait data exhibit a certain periodicity. The period stability of the gait signal on the Z-axis is better than that on the X-axis and Y-axis; therefore, the Z-axis signal is usually used to extract the gait cycle. Experimental analysis [19] has shown that the acceleration in the direction of gravity reaches its maximum at the moment of heel strike, while its minimum corresponds to the initial force point as the foot pushes forward. By analyzing the subjects' gait data, we found that gait cycles divided at the maxima of the Z-axis acceleration were more stable, in terms of both the shape of the data curve and the number of sample points, than those divided at the minima. Therefore, the locations of the maximum points of the Z-axis acceleration are used as the basis for dividing gait cycle segments in this experiment.
Extracting the gait cycle is one of the key steps affecting the recognition accuracy of gait recognition systems. Techniques based on threshold or peak detection are the most widely used for gait cycle detection [20]. In this study, we use the maximum value of the gait acceleration signal in the direction of gravity (Z-axis) as the basis and apply an algorithm combining peak detection and dynamic time warping (DTW) to segment the gait acceleration and angular velocity signals, achieving automatic segmentation of the gait cycle. The duration of each step in normal uniform walking is about 0.4 to 0.6 seconds; with the MTw's 100 Hz sampling frequency, each gait cycle (two steps) therefore contains 80 to 120 sample points. The specific algorithm steps are as follows:
First, traverse all the sample points of the Z-axis acceleration signal and detect all the maximum points. Maximum detection can be described by the following formula:
$x_{i-1} < x_i \ \text{and} \ x_i > x_{i+1}$ | (1) |
where $x_i$ is the sample point at the current moment, $x_{i-1}$ is the sample point at the previous moment, and $x_{i+1}$ is the sample point at the next moment.
Second, the pseudo-extreme points are removed according to a threshold value. Maximum points of the Z-axis acceleration smaller than 9.8 m/s² are removed, because the acceleration of a person's foot striking the ground while walking should exceed the acceleration of gravity. In addition, the time interval between two consecutive local maxima should be between 0.4 and 0.6 seconds; maximum points that do not satisfy this requirement are removed. Figure 3 shows the results of maximum point detection.
Finally, the gait data are divided into cycle segments according to the index value corresponding to the maximum point of acceleration.
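The three segmentation steps above can be sketched in Python (a minimal illustration under the stated assumptions: 100 Hz sampling, a 9.8 m/s² threshold, and a 0.4-0.6 s inter-peak interval; function and parameter names are ours, not the authors'):

```python
def detect_heel_strikes(z_acc, fs=100, g=9.8, min_gap=0.4, max_gap=0.6):
    """Return indices of heel-strike peaks in the Z-axis acceleration.

    Step 1: find all local maxima (Eq 1).
    Step 2: drop pseudo-extrema below the gravity threshold, and drop
            peaks whose spacing from the previously kept peak falls
            outside the 0.4-0.6 s step interval.
    """
    # Step 1: local maxima
    peaks = [i for i in range(1, len(z_acc) - 1)
             if z_acc[i - 1] < z_acc[i] > z_acc[i + 1]]
    # Step 2a: amplitude threshold (heel strike exceeds gravity)
    peaks = [i for i in peaks if z_acc[i] > g]
    # Step 2b: inter-peak interval check (first peak kept unconditionally)
    kept = []
    for i in peaks:
        if not kept or min_gap <= (i - kept[-1]) / fs <= max_gap:
            kept.append(i)
    return kept
```

The cycle segments are then simply the slices of the signal between consecutive kept indices.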
To obtain typical gait cycle segments with a similar waveform trend, we use the DTW algorithm to match similarity and filter out gait cycles with insignificant periodicity and similarity. DTW [21] is a method for measuring the similarity between two time series of unequal length: it computes the distance between two series by stretching and compressing them, and if the distance exceeds a threshold, the series are deemed dissimilar. Since some disturbed gaits occur during walking, these should be eliminated. A gait cycle with significant periodicity is selected as the template, the DTW algorithm computes the distance between each remaining segment and the template, and segments whose distance exceeds the threshold are removed. In this way, a series of gait cycle segments with similar waveforms is extracted for each subject; their similar feature values facilitate gait recognition.
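A textbook DTW distance and the threshold-based filtering it enables can be sketched as follows (the O(nm) dynamic program is standard; the threshold value and function names are illustrative, not the paper's):

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two possibly
    unequal-length 1-D sequences, O(len(a) * len(b))."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

def filter_cycles(template, cycles, threshold):
    """Keep only cycles whose DTW distance to the template is within threshold."""
    return [c for c in cycles if dtw_distance(template, c) <= threshold]
```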
A user's walking speed varies across different scenarios and external factors; even in the same scene, it is slightly affected by various factors. This study uses interpolation to process gait cycle samples recorded at different walking speeds. We apply cubic interpolation to all gait cycle segments, turning each segment into a sample with 128 points. Therefore, the inertial signal sequence in each gait cycle has 128 sample points after interpolation, regardless of the subject's stride, which is convenient for the subsequent data processing and gait recognition. Figure 4(a) shows the Z-axis acceleration curves of four gait cycles of the same subject, and Figure 4(b) shows Z-axis acceleration curves of gait cycles from four different subjects. From Figure 4 we can see that the sample curves of the same subject fluctuate in roughly the same way, while the curves of different subjects differ noticeably, so subjects can be identified from the gait cycle segments.
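The resampling of every cycle to 128 points can be illustrated as below. Plain linear interpolation is used here for brevity, whereas the paper applies cubic interpolation; the function name is ours:

```python
import numpy as np

def resample_cycle(cycle, target_len=128):
    """Resample one gait-cycle segment (n_samples x n_axes) to a fixed
    length so every cycle has 128 points regardless of stride duration."""
    cycle = np.asarray(cycle, dtype=float)
    old_t = np.linspace(0.0, 1.0, cycle.shape[0])
    new_t = np.linspace(0.0, 1.0, target_len)
    # interpolate each axis (X, Y, Z) independently
    return np.column_stack([np.interp(new_t, old_t, cycle[:, k])
                            for k in range(cycle.shape[1])])
```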
In wearable device-based gait recognition, the accelerometer and gyroscope are the most commonly used sensors, and most of the raw gait data they collect are one-dimensional time series. Although deep learning methods (e.g., 1D CNN, LSTM) can process one-dimensional time series while preserving nonlinear features, the correlations along the time axis are not fully considered. Wang et al. [22] proposed preserving the time dependence and correlation of the original data by converting one-dimensional time series into two-dimensional images using the GAF algorithm. The GAF algorithm, which evolved from the Gram matrix, represents the time series in a polar coordinate system instead of a Cartesian one. The steps of the GAF algorithm [22] are as follows:
Given a time series $X = \{x_1, x_2, \dots, x_N\}$ containing N real-valued observations, normalization is first conducted to rescale X into the range [-1, 1] as follows:
$\tilde{x}_i = \dfrac{(x_i - \max(X)) + (x_i - \min(X))}{\max(X) - \min(X)}$ | (2) |
Next, the normalized one-dimensional time series is used to preserve the absolute time relationship of the series using polar coordinates, encoding the values as angular cosines and the timestamps as radius, which can be expressed by the following equation:
$\begin{cases} \varphi_i = \arccos(\tilde{x}_i), & -1 \le \tilde{x}_i \le 1,\ \tilde{x}_i \in \tilde{X} \\ r_i = \dfrac{t_i}{N}, & t_i \in \mathbb{N} \end{cases}$ | (3) |
where $\tilde{x}_i$ is the rescaled observation, $t_i$ is the timestamp corresponding to the time series $X$, and $N$ is the total number of timestamps. When converted to the polar coordinate system, the angles obtained by taking the arccosine of the values rescaled into [-1, 1] fall within the bounds [0, π].
Finally, after transforming the rescaled time series into the polar coordinate system, the temporal correlation among different time intervals is identified from the angular perspective by computing the trigonometric sum for each pair of sample points. The Gramian Angular Summation Field (GASF) is defined as follows:
$\mathrm{GASF} = \begin{pmatrix} \cos(\varphi_1+\varphi_1) & \cos(\varphi_1+\varphi_2) & \cdots & \cos(\varphi_1+\varphi_n) \\ \cos(\varphi_2+\varphi_1) & \cos(\varphi_2+\varphi_2) & \cdots & \cos(\varphi_2+\varphi_n) \\ \vdots & \vdots & \ddots & \vdots \\ \cos(\varphi_n+\varphi_1) & \cos(\varphi_n+\varphi_2) & \cdots & \cos(\varphi_n+\varphi_n) \end{pmatrix}$ | (4) |
Briefly, there are three steps to convert a one-dimensional time series into a two-dimensional image using the GAF algorithm: scaling, coordinate system conversion, and the trigonometric-sum transform. Figure 5 shows the complete process of transforming the rescaled time series into an encoded map in polar coordinates. The time series is normalized using Eq (2), Eq (3) converts it to the polar coordinate system, and finally the image is obtained using the GASF (Eq (4)).
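The three GAF steps map directly onto a few lines of NumPy. The sketch below is a generic GASF implementation following Eqs (2)-(4), not the authors' code:

```python
import numpy as np

def gasf(series):
    """Convert a 1-D time series into a Gramian Angular Summation Field.

    Steps follow Eqs (2)-(4): rescale into [-1, 1], encode values as
    angles via arccos, then build the pairwise cosine-sum matrix."""
    x = np.asarray(series, dtype=float)
    # Eq (2): min-max rescaling into [-1, 1]
    x = ((x - x.max()) + (x - x.min())) / (x.max() - x.min())
    # Eq (3): angular encoding (clip guards against rounding error)
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    # Eq (4): GASF[i, j] = cos(phi_i + phi_j)
    return np.cos(phi[:, None] + phi[None, :])
```

Applying `gasf` to each axis of a 128-point cycle and stacking the three results yields the 128 × 128 × 3 image used later in the paper.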
Inspired by the MobileNet model [23], our model mainly employs depthwise separable convolution in the convolution layers. Depthwise separable convolution [24] consists of two parts, depthwise convolution and pointwise convolution, and is primarily used to extract features. When feature extraction is performed on multi-channel inputs, each filter of the depthwise convolution corresponds to one input channel, so the convolution operation yields an intermediate feature for each channel. The pointwise convolution then applies multiple 1 × 1 convolution kernels to the intermediate features, performing standard convolution operations to obtain multiple outputs with the same height and width as the input. These outputs are combined along the channel axis to produce the final output. Depthwise separable convolution significantly reduces the number of parameters and the computational cost, thus further improving recognition efficiency. Figure 6 shows the convolution process for DSC.
Assume the size of the input feature map is H × W × C, where H, W, and C are its height, width, and number of channels, respectively. The depthwise convolution is performed on each channel using a convolution kernel of size 3 × 3, and its computational cost is:
$N_{\text{depthwise}} = H \times W \times C \times 3 \times 3$ | (5) |
For pointwise convolution, the feature maps generated by the depthwise convolution are expanded to M output channels by M convolution kernels of size 1 × 1. The computational cost of the pointwise convolution is:
$N_{\text{pointwise}} = H \times W \times C \times 1 \times 1 \times M$ | (6) |
Therefore, the computational cost of the depthwise separable convolution is the sum of the depthwise and pointwise costs:
$N_{\text{separable}} = N_{\text{depthwise}} + N_{\text{pointwise}} = H \times W \times C \times (3 \times 3 + M)$ | (7) |
For a standard convolution, the cost is:
$N_{\text{standard}} = H \times W \times C \times 3 \times 3 \times M$ | (8) |
By comparing Eqs (7) and (8), we can see that the computational cost of the depthwise separable convolution is reduced by a factor of $9M/(M+9)$ compared to the standard convolution. Therefore, the computational cost of the DSC-based DSCNN network is also reduced, while gait recognition accuracy is maintained or even improved.
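Eqs (5)-(8) can be checked numerically with a small helper (illustrative only; the counts follow the formulas above for a 3 × 3 kernel):

```python
def dsc_cost(H, W, C, M, k=3):
    """Cost of a depthwise separable convolution, Eqs (5)-(7)."""
    depthwise = H * W * C * k * k    # Eq (5): one kxk filter per channel
    pointwise = H * W * C * M        # Eq (6): M 1x1 filters mix channels
    return depthwise + pointwise     # Eq (7)

def standard_cost(H, W, C, M, k=3):
    """Cost of a standard convolution, Eq (8)."""
    return H * W * C * k * k * M
```

For a 3 × 3 kernel the ratio standard/separable is 9M/(M + 9); with M = 128 output channels this is roughly an 8.4× reduction.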
In this paper, a gait recognition method using multimodal fusion is proposed, and a DC-DSCNN structure is designed. As shown in Figure 7, the two channels perform feature extraction on the time series data of gait acceleration and angular velocity and on the corresponding GAF images, respectively. The features are then fused and the recognition results are output. The time series data and images are trained with separate DSCNNs, which yields feature spaces of the gait data in different modalities and facilitates gradient descent during training; thus, the feature information of the gait data can be better extracted.
The network structure and parameters of the DSCNN model are shown in Table 2. The DSC layers in the DSCNN are used to extract features. Three DSC layers with different numbers of convolution kernels (32, 64, and 128) are used in the proposed model. A batch normalization layer is added between each DSC layer and the activation function layer to normalize the feature maps generated by the DSC layers and prevent vanishing gradients [25]. ReLU is chosen as the activation function to compensate for the expressive deficiency of a linear model and to mitigate overfitting. In addition, a max pooling layer with a kernel of size 1 × 2 and a stride of 2 is added after each activation layer. The DSCNN module can be denoted as in Eq (9).
$Y_{(H', W', C')} = \mathrm{MaxP}\big(\delta_N(\mathrm{DS}(F_{(H, W, C)}))\big)$ | (9) |
| Layer Name | Kernel Size | Kernel Num. | Stride | Feature Map |
| --- | --- | --- | --- | --- |
| DSConv1 | 1 × 3 | 32 | 1 | 1 × 128 × 32 |
| BN | / | / | / | 1 × 128 × 32 |
| Activation | / | / | / | 1 × 128 × 32 |
| Pool1 | 1 × 2 | / | 2 | 1 × 64 × 32 |
| DSConv2 | 1 × 3 | 64 | 1 | 1 × 64 × 64 |
| BN | / | / | / | 1 × 64 × 64 |
| Activation | / | / | / | 1 × 64 × 64 |
| Pool2 | 1 × 2 | / | 2 | 1 × 32 × 64 |
| DSConv3 | 1 × 3 | 128 | 1 | 1 × 32 × 128 |
| BN | / | / | / | 1 × 32 × 128 |
| Activation | / | / | / | 1 × 32 × 128 |
| Pool3 | 1 × 2 | / | 2 | 1 × 16 × 128 |
where F represents the feature map of size H × W × C, DS represents the depthwise separable convolution operation (the kernel size of the DSC layer is set to 1 × 3), δN denotes batch normalization followed by the activation function, and MaxP is max pooling, which reduces the size of the feature maps.
Global average pooling (GAP) is appended after the last module. It is a combination of two processes of the fully connected layer (FCL), where the feature map is expanded into multiple feature matrices and classified, thus eliminating the intermediate connection weight parameters and reducing a large number of parameters. The final layer is a softmax classifier that obtains the probability distribution associated with the input sample and outputs the classification results.
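To make the module in Table 2 concrete, here is a minimal NumPy forward pass of one DSC block (depthwise 1 × 3 convolution, ReLU, 1 × 2 max pooling) followed by GAP. Batch normalization is omitted for brevity, and all names, shapes, and the 6-channel input are illustrative assumptions:

```python
import numpy as np

def dsc_block(x, dw_kernels, pw_kernels):
    """One DSCNN module (Eq 9) as a NumPy forward pass.

    x          : (length, channels) input feature map
    dw_kernels : (channels, 3) one 1x3 depthwise filter per channel
    pw_kernels : (channels, out_channels) 1x1 pointwise filters
    """
    L, C = x.shape
    padded = np.pad(x, ((1, 1), (0, 0)))        # 'same' padding in time
    dw = np.empty_like(x)
    for c in range(C):                          # depthwise: per-channel 1x3 conv
        for i in range(L):
            dw[i, c] = padded[i:i + 3, c] @ dw_kernels[c]
    pw = dw @ pw_kernels                        # pointwise: mix channels
    act = np.maximum(pw, 0.0)                   # ReLU
    return act.reshape(L // 2, 2, -1).max(axis=1)   # 1x2 max pool, stride 2

def gap(x):
    """Global average pooling: one value per channel, replacing the FCL."""
    return x.mean(axis=0)
```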
For gait recognition with the time series and GAF image, a multimodal fusion gait recognition method based on DC-DSCNN is proposed. The gait recognition process is shown in Figure 7. The multimodal fusion process combines information from the time series and GAF images of gait data to improve the accuracy of the prediction results and the robustness of the prediction model.
The identification process using the GAF image is as follows. The gait cycle segments are extracted from the time series acquired by the accelerometer or gyroscope, and the one-dimensional time series are then converted into two-dimensional image matrices using the GAF algorithm. Taking the data acquired by the three-axis gyroscope as an example, let the length of the gait cycle segment be S. Since each sample contains three channels, the dimension of each sample is S × 3. Using the GAF algorithm, the three-channel time series is converted into a two-dimensional image with three channels. In this study, the size of the GAF image is 128 × 128 × 3, which gives the GAF image the same format as an ordinary RGB image. After this, the GAF images obtained from the angular velocity data are input to the DSCNN model, and the classification result is the label with the highest probability after feature extraction. Experimental comparisons of gait recognition based on acceleration and angular velocity are presented in Section 3.3.
To improve the recognition rate and robustness, the time series and GAF images of gait cycles are used as inputs for two channels. For the upper layer network architecture, the gait cycle time series data are transferred to the DSCNN model, and after a series of feature extraction operations, the results are fed into the GAP layer. For the lower layer network, the time series data of the gait cycle are first converted into GAF images and then the same operation as the upper layer network is performed. Finally, the feature extraction results of both channels are added in a fusion layer to produce a joint feature vector. Then, the joint feature vector is fed into the softmax classifier. The classification label corresponding to the obtained maximum probability is the gait identification result.
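The fusion step described above can be sketched as follows: the two channels' GAP outputs are added element-wise and passed to a softmax classifier. The weights and shapes here are placeholders, not the trained model's:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())     # subtract max for numerical stability
    return e / e.sum()

def fuse_and_classify(series_feat, image_feat, weights, bias):
    """Add the two channels' GAP feature vectors (the fusion layer) and
    map the joint vector to class probabilities with softmax."""
    joint = series_feat + image_feat          # element-wise feature fusion
    probs = softmax(joint @ weights + bias)   # softmax classifier head
    return int(np.argmax(probs)), probs
```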
Wang et al. [26] proposed a new CNN-based image recognition method in which an exponential linear unit (ELU) [27] was applied, and obtained higher recognition accuracy than state-of-the-art methods. Hence, we chose ReLU [28] and ELU as the activation function to train the DC-DSCNN model. The equation for ReLU is as follows:
$f_{\text{ReLU}}(x) = \max(0, x) = \begin{cases} x, & x \ge 0 \\ 0, & x < 0 \end{cases}$ | (10) |
ReLU avoids gradient saturation and speeds up training, but its gradient is zero when x < 0, so affected units stop learning. ELU alleviates this problem by reducing the bias-offset effect, bringing the normal gradient closer to the unit natural gradient.
$f_{\text{ELU}}(x) = \begin{cases} \gamma(e^x - 1), & x \le 0 \\ x, & x > 0 \end{cases}$ | (11) |
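Eqs (10) and (11) translate directly into scalar helpers (γ is the ELU scale parameter from Eq (11)):

```python
import math

def relu(x):
    """Eq (10): identity for non-negative x, zero otherwise."""
    return x if x >= 0 else 0.0

def elu(x, gamma=1.0):
    """Eq (11): the smooth negative branch gamma*(e^x - 1) keeps a
    non-zero gradient for x <= 0, avoiding 'dead' units."""
    return x if x > 0 else gamma * (math.exp(x) - 1.0)
```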
To verify the classification performance of the proposed DC-DSCNN network, we first introduce the evaluation metrics. For gait recognition using deep learning, classifier performance can be measured by computing accuracy, precision, recall, and F1-score. Accuracy is the ratio of the number of correctly classified samples to the total number of samples in a given test dataset; higher accuracy indicates better predictive ability. The F1-score balances precision and recall, with higher values representing better classifier performance. However, gait identification is a multi-class task, so the binary F1-score cannot be applied directly. The simplest approach is to compute the macro-F1 score, which calculates the F1-score for each category and then averages them. The evaluation metrics are given in Eqs (12) to (16), where true positive, true negative, false positive, and false negative are represented by TP, TN, FP, and FN, respectively.
$\mathrm{Accuracy} = \dfrac{TP + TN}{TP + TN + FP + FN}$ | (12) |
$\mathrm{Precision} = \dfrac{TP}{TP + FP}$ | (13) |
$\mathrm{Recall} = \dfrac{TP}{TP + FN}$ | (14) |
$F1\text{-}score = \dfrac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$ | (15) |
$\mathrm{Macro\text{-}F1} = \dfrac{1}{M} \sum_{i=1}^{M} F1\text{-}score_i$ | (16) |
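Eqs (12)-(16) can be computed from predicted and true labels as follows (a generic implementation, equivalent to standard library routines such as scikit-learn's macro-averaged `f1_score`):

```python
def accuracy(y_true, y_pred):
    """Eq (12): fraction of correctly classified samples."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred, n_classes):
    """Eq (16): per-class one-vs-rest F1 (Eqs 13-15), averaged."""
    scores = []
    for c in range(n_classes):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0   # Eq (13)
        recall = tp / (tp + fn) if tp + fn else 0.0      # Eq (14)
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)            # Eq (15)
        scores.append(f1)
    return sum(scores) / n_classes
```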
In this experiment, the performance of the proposed method is evaluated with time series data. We conducted experiments on two datasets on a PC with an i5-8400 CPU and 8 GB RAM. To verify that our proposed method is both effective and lightweight, we compared the performance of six gait recognition methods. To ensure the consistency of the experiments, the optimizer was set to Adam for all methods, the learning rate was set to 0.001, and the number of training epochs was 200. The experimental results for the different methods on the HD Dataset are shown in Table 3.
| Method | Accuracy | Macro-F1 score | Parameters Num. | Memory Size |
| --- | --- | --- | --- | --- |
| IDNet [11] | 98.33% | 0.9832 | 26,284 | 452 KB |
| CNN+LSTM [13] | 98.61% | 0.9865 | 4,716,406 | 56.7 MB |
| CNN+LSTM [14] | 75.97% | 0.7215 | - | - |
| deep CNN [15] | 98.33% | 0.9836 | 3,013,704 | 34.8 MB |
| DSCNN (FCL) | 96.81% | 0.9695 | 4,208,246 | 45.8 MB |
| DSCNN (GAP) | 98.89% | 0.9894 | 15,566 | 532 KB |
Our results show that the existing gait recognition methods all achieve good recognition accuracy on the HD Dataset. This indicates that the gait data of different users collected using the accelerometer and gyroscope in the wearable device are distinguishable. In addition, to compare the effect of using an FCL versus a GAP layer as the last layer of the network, we constructed two gait recognition models based on the DSCNN module combined with FCL or GAP, respectively. The experimental results show that the model using GAP has a 2.08% higher recognition accuracy and 99.6% fewer parameters than the one using FCL. Meanwhile, the accuracy and macro-F1 score obtained using the DSCNN model are higher than those obtained using the existing methods. While the accuracy of the DSCNN model is only 0.28% higher than that of CNN+LSTM, its memory size is reduced by at least 36 MB.
The performance of gait recognition based on the different models was also evaluated on Dataset #1. We selected five methods [11,13,14,15,16] to compare with ours; the results are shown in Table 4. Our DSCNN network achieved better recognition accuracy than the methods in [13,14,15], reaching 94.44% on Dataset #1, an improvement of 0.92% over [13]. In [16], a lightweight model called CNN-CEDS was proposed to reduce model complexity. Although the recognition accuracy of our model is slightly lower than that of CNN-CEDS, the memory footprint of our model is much smaller: only 532 KB versus 4.24 MB.
| Method | Accuracy | Macro-F1 score | Memory size |
|---|---|---|---|
| CNN+LSTM [13] | 93.52% | - | 56.7 MB |
| CNN & LSTM [14] | 94.15% | - | - |
| deep CNN [15] | 90.64% | 88.06% | 36.4 MB |
| CNN-CEDS [16] | 94.71% | 93.98% | 4.24 MB |
| DSCNN (GAP) | 94.44% | 93.42% | 532 KB |
In this part, we compare the performance of the proposed DC-DSCNN model with single-modal and multimodal fusion inputs. As described in Section 2.4, our method generates GAF images, and a series of experiments in this section demonstrates the effectiveness of multimodal fusion based on DC-DSCNN.
To evaluate the performance of GAF images, four cases were investigated: using only images derived from acceleration, using only images derived from angular velocity, feeding the acceleration and angular-velocity GAF images into two separate channels, and using GAF images generated from fused acceleration and angular-velocity data. The experimental results are shown in Table 5. The accuracies of the DSCNN model with GAF images derived from acceleration and from angular velocity are 96.11% and 90.14%, respectively. Recognition performance improves when the two GAF images are input to two channels, but the memory size of the model increases. The method using GAF images with fused acceleration and angular-velocity data performs best, reaching an accuracy of 97.64%.
| Input Data | Accuracy |
|---|---|
| Acceleration | 96.11% |
| Angular velocity | 90.14% |
| Acceleration and angular velocity | 96.39% |
| Acceleration and angular velocity fusion | 97.64% |
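Section 2.4 is not reproduced in this excerpt, so as a hedged illustration, the standard Gramian Angular Summation Field construction (a common GAF variant) maps a 1-D series to a 2-D image while preserving temporal order along the diagonal:

```python
import numpy as np

def gasf(series):
    """Gramian Angular Summation Field of a 1-D series.
    Rescale to [-1, 1], encode each value as a polar angle,
    then G[i, j] = cos(phi_i + phi_j)."""
    x = np.asarray(series, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1   # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))            # polar encoding
    return np.cos(phi[:, None] + phi[None, :])        # N x N image

# A gait-like sample window becomes an N x N single-channel image
img = gasf([0.0, 0.5, 1.0, 0.5, 0.0])
print(img.shape)   # (5, 5)
```

Because G[i, j] depends on the pair (i, j), the image retains the time dependence of the raw signal, which is the motivation given for converting the series to two-dimensional format before the 2-D DSCNN branch.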
Therefore, we select the GAF images with fused acceleration and angular-velocity data as the input to the DC-DSCNN model. We performed comparative experiments using only the time series of acceleration and angular velocity, only the GAF images, or both as inputs to DC-DSCNN. As Table 6 shows, multimodal data fusion outperforms either single modality: the accuracy and macro-F1 score of multimodal fusion based on DC-DSCNN reach at least 99.31% and 99.29%, respectively. When the ReLU activation function is replaced with ELU, the accuracy further improves to 99.58%.
| Model Input | Accuracy | Macro-F1 score |
|---|---|---|
| Time series | 98.89% | 98.94% |
| GAF image | 97.64% | 97.56% |
| Time series + GAF (ReLU) | 99.31% | 99.29% |
| Time series + GAF (ELU) | 99.58% | 99.52% |
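The multimodal step can be pictured as late fusion of the two branch outputs. The sketch below is an illustrative assumption, not the paper's exact architecture: each branch is reduced to a feature vector (e.g. by GAP), the vectors are concatenated, and a softmax classifier scores the 24 HD Dataset subjects. All dimensions and weights here are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
ts_feat = rng.standard_normal(128)    # stand-in for the time-series branch output
gaf_feat = rng.standard_normal(128)   # stand-in for the GAF-image branch output

fused = np.concatenate([ts_feat, gaf_feat])        # 256-dim fused feature
W = rng.standard_normal((24, fused.size)) * 0.01   # illustrative classifier weights

logits = W @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()                               # softmax over the 24 subjects

print(probs.shape)   # (24,)
```

Concatenation lets the classifier weigh complementary evidence from the raw dynamics and the GAF texture, which is consistent with Table 6 showing both modalities together beating either alone.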
In this paper, we proposed a multimodal fusion gait recognition method based on DC-DSCNN. First, we proposed a gait cycle segmentation method combining peak detection and DTW and constructed the HD Dataset. Then, a DSCNN was proposed to extract features from the time series of acceleration and angular velocity. Comparing the DSCNN model with six other models, the experimental results show that our method not only achieves 98.89% recognition accuracy but also occupies only 532 KB of memory. In addition, to preserve the temporal dependence of the original time series, we adopted the GAF algorithm to convert the time series into a two-dimensional format. We compared the performance of GAF images built from acceleration only, from angular velocity only, and from a fusion of both. The results reveal that the GAF images combining acceleration and angular velocity give the best recognition results, 1.53% and 7.5% higher than the two single-modality variants, respectively. Finally, we proposed a multimodal fusion method that feeds both the time series and the GAF images into the DC-DSCNN model, which can be helpful for gait-based identification. The recognition accuracy on the HD Dataset reaches 99.58% with a recognition time of 1.7 s. In this study, we classified subjects' identities from gait data obtained with a wearable device, and the proposed framework recognizes identity effectively. In the future, we intend to deploy the model on wearable devices for real-time gait recognition.
The authors would like to thank all the colleagues who have supported this work. This work is jointly supported by the Natural Science Foundation of Hebei Province (No. F2021201002); the Natural Science Foundation of Hebei Province (No. F2021201005); the Science and Technology Project of Hebei Education Department (No. ZD2020146); the Postdoctoral Scientific Research Project of Hebei Province (No. B2019005001); the Key Research and Development Program of Baoding Science and Technology Bureau (No. 1911Q001); the Program for Top 80 Innovative Talents in Colleges and Universities of Hebei Province (No. SLRC2017022); and the National Key Research and Development Program of China (No. 2017YFB1401200).
The authors declare there is no conflict of interest.
| Dataset | Usage | Number of subjects | Training samples | Test samples |
|---|---|---|---|---|
| HD Dataset | Identification | 24 | 1,680 | 720 |
| whuGait Dataset #1 | Identification | 118 | 33,104 | 3,740 |
| Layer Name | Kernel Size | Kernel Num. | Stride | Feature Map |
|---|---|---|---|---|
| DSConv1 | 1 × 3 | 32 | 1 | 1 × 128 × 32 |
| BN | / | / | / | 1 × 128 × 32 |
| Activation | / | / | / | 1 × 128 × 32 |
| Pool1 | 1 × 2 | / | 2 | 1 × 64 × 32 |
| DSConv2 | 1 × 3 | 64 | 1 | 1 × 64 × 64 |
| BN | / | / | / | 1 × 64 × 64 |
| Activation | / | / | / | 1 × 64 × 64 |
| Pool2 | 1 × 2 | / | 2 | 1 × 32 × 64 |
| DSConv3 | 1 × 3 | 128 | 1 | 1 × 32 × 128 |
| BN | / | / | / | 1 × 32 × 128 |
| Activation | / | / | / | 1 × 32 × 128 |
| Pool3 | 1 × 2 | / | 2 | 1 × 16 × 128 |
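The feature-map column of this configuration can be sanity-checked by tracing shapes. Assuming "same" padding for the 1 × 3 depthwise separable convolutions (consistent with the unchanged lengths in the table), only the stride-2 pooling layers shrink the temporal dimension:

```python
# Shape trace for the DSCNN backbone in the layer table:
# DSConv + BN + activation preserve the length; each 1 x 2 pool with
# stride 2 halves it: 128 -> 64 -> 32 -> 16.
length, channels = 128, 1           # input window of 128 samples (assumed)
for out_channels in (32, 64, 128):  # DSConv1-3 kernel counts from the table
    channels = out_channels         # DSConv sets the channel count
    length //= 2                    # Pool1-3: 1 x 2 pooling, stride 2
print((1, length, channels))        # final feature map: (1, 16, 128)
```

The result matches the table's final 1 × 16 × 128 feature map, which is the tensor the GAP layer then collapses to a 128-dim vector.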