Review

Mathematical and computational perspectives on next-generation neural networks for sign language recognition: A systematic review of advances, challenges, and assistive applications

  • Published: 09 February 2026
  • MSC : 68T07, 68T45

  • Artificial intelligence (AI) and machine learning (ML) have revolutionized assistive technologies, particularly for individuals with hearing and speech impairments. This systematic review critically examines recent innovations in next-generation neural network architectures for sign language recognition (SLR), emphasizing their mathematical and computational foundations. Following PRISMA guidelines, we analyze state-of-the-art models, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), long short-term memory (LSTM), and hybrid approaches integrating classical machine learning methods such as support vector machines (SVMs). We explore strategies for feature extraction, data augmentation, multimodal fusion, and optimization, highlighting their roles in improving accuracy, robustness, and real-time adaptability. Persistent challenges include dataset scarcity, limited generalizability, and computational trade-offs. From a mathematical perspective, optimization techniques, probabilistic modeling, and explainable AI frameworks are emerging as key enablers for safe and trustworthy SLR systems. This review identifies research gaps and proposes future directions toward responsible, mathematically grounded, and computationally efficient AI-powered assistive technologies.

    Citation: Yahia Said, Mohammad Barr, Yazan A. Alsariera, Ahmed A. Alsheikhy. Mathematical and computational perspectives on next-generation neural networks for sign language recognition: A systematic review of advances, challenges, and assistive applications[J]. AIMS Mathematics, 2026, 11(2): 3839-3902. doi: 10.3934/math.2026156
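
    The review's scope is architectural rather than implementational, but a minimal sketch can make the hybrid pipelines it surveys concrete: a per-frame CNN feeding an LSTM that classifies a short clip into a sign gloss. The PyTorch layer sizes, 30-frame clip length, 112x112 resolution, and 64-class vocabulary below are illustrative assumptions, not values drawn from any of the studies reviewed.

    ```python
    # Illustrative only: a minimal CNN + LSTM hybrid for isolated sign language
    # recognition from short video clips, of the kind surveyed in the review.
    # Layer sizes, clip length, and class count are assumptions for this sketch.
    import torch
    import torch.nn as nn

    class CnnLstmSLR(nn.Module):
        def __init__(self, num_classes: int = 64, hidden_size: int = 256):
            super().__init__()
            # Per-frame spatial feature extractor (CNN).
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),            # -> (batch*frames, 64, 1, 1)
            )
            # Temporal model over the sequence of per-frame features (LSTM).
            self.lstm = nn.LSTM(input_size=64, hidden_size=hidden_size,
                                batch_first=True)
            self.classifier = nn.Linear(hidden_size, num_classes)

        def forward(self, clips: torch.Tensor) -> torch.Tensor:
            # clips: (batch, frames, channels, height, width)
            b, t, c, h, w = clips.shape
            frame_feats = self.cnn(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
            _, (h_n, _) = self.lstm(frame_feats)    # last hidden state summarizes the clip
            return self.classifier(h_n[-1])         # (batch, num_classes) gloss logits

    # Example: a batch of 2 clips, each 30 RGB frames of 112x112 pixels.
    if __name__ == "__main__":
        model = CnnLstmSLR()
        logits = model(torch.randn(2, 30, 3, 112, 112))
        print(logits.shape)  # torch.Size([2, 64])
    ```

    Swapping the LSTM head for an SVM trained on the pooled CNN features, or fusing skeletal keypoints alongside the RGB stream, would correspond to the classical-hybrid and multimodal-fusion variants discussed in the review.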



  • © 2026 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)