Research article

Research on motion recognition based on multi-dimensional sensing data and deep learning algorithms

  • Received: 28 April 2023 Revised: 12 June 2023 Accepted: 24 June 2023 Published: 05 July 2023
  • Motion recognition provides movement information for people with physical dysfunction and the elderly, and supports the production of motion-sensing games, so accurate recognition of human motion is important. We employed three classical machine learning algorithms, namely Random Forest (RF), K-Nearest Neighbors (KNN) and Decision Tree (DT), and three deep learning models, namely Dynamic Neural Network (DNN), Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN), for motion recognition, and compared them using data from Inertial Measurement Units (IMUs) worn on seven parts of the body. Overall, the performance differences among the three classical machine learning algorithms were insignificant. The RF model performed best, achieving a recognition rate of 96.67%, followed by the KNN model with an optimal recognition rate of 95.31% and the DT model with an optimal recognition rate of 94.85%. The performance differences among the deep learning models were significant; the DNN model performed best, achieving a recognition rate of 97.71%. Our study validated the feasibility of using multi-dimensional data for motion recognition and showed that the waist is the optimal wearing position for distinguishing daily activities based on multi-dimensional sensing data. In terms of algorithms, deep learning models based on multi-dimensional sensors performed better, while among traditional machine learning algorithms tree-structured models still showed the best performance. The results indicate that IMUs combined with deep learning algorithms can effectively recognize actions and provide a promising basis for wider applications in the field of motion recognition.
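    The following is a minimal sketch of how a classical machine learning comparison of this kind could be set up with scikit-learn. It is not the authors' actual pipeline: the synthetic data, window length, statistical features, train/test split and hyperparameters are all illustrative assumptions standing in for the real IMU recordings and activity labels.

    ```python
    # Sketch: comparing RF, KNN and DT on windowed IMU features.
    # Synthetic data stands in for accelerometer/gyroscope recordings;
    # window length, features and hyperparameters are illustrative only.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    # Stand-in raw data: 600 windows x 128 samples x 6 channels
    # (3-axis accelerometer + 3-axis gyroscope), six activity classes.
    raw = rng.normal(size=(600, 128, 6))
    labels = rng.integers(0, 6, size=600)

    # Simple per-channel statistical features (mean, std, min, max) per window.
    feats = np.concatenate(
        [raw.mean(axis=1), raw.std(axis=1), raw.min(axis=1), raw.max(axis=1)],
        axis=1,
    )

    X_train, X_test, y_train, y_test = train_test_split(
        feats, labels, test_size=0.3, random_state=0, stratify=labels
    )

    models = {
        "RF": RandomForestClassifier(n_estimators=200, random_state=0),
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "DT": DecisionTreeClassifier(random_state=0),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        acc = accuracy_score(y_test, model.predict(X_test))
        print(f"{name} recognition rate: {acc:.2%}")
    ```

    With real windowed IMU features, the same loop would reproduce the kind of side-by-side recognition-rate comparison reported in the abstract; the deep learning models (DNN, CNN, RNN) would instead consume the raw windows directly rather than hand-crafted statistics.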

    Citation: Jia-Gang Qiu, Yi Li, Hao-Qi Liu, Shuang Lin, Lei Pang, Gang Sun, Ying-Zhe Song. Research on motion recognition based on multi-dimensional sensing data and deep learning algorithms[J]. Mathematical Biosciences and Engineering, 2023, 20(8): 14578-14595. doi: 10.3934/mbe.2023652







  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
