
Time–frequency feature transform suite for deep learning-based gesture recognition using sEMG signals

Published online by Cambridge University Press: 04 November 2022

Xin Zhou
Affiliation:
Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; University of Science and Technology of China, Hefei, Anhui 230026, China
Jiancong Ye
Affiliation:
Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Shien-Ming Wu School of Intelligent Engineering, South China University of Technology, Guangzhou 511442, China
Can Wang*
Affiliation:
Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
Junpei Zhong
Affiliation:
The Hong Kong Polytechnic University, Hong Kong, China
Xinyu Wu
Affiliation:
Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
*Corresponding author. E-mail: can.wang@siat.ac.cn

Abstract

Deep learning methods have recently achieved strong performance in gesture recognition using surface electromyography (sEMG) signals. However, improving recognition accuracy in multi-subject gesture recognition remains challenging. In this study, we aimed to improve recognition performance by adding subject-specific prior knowledge to guide multi-subject gesture recognition. We proposed a time–frequency feature transform suite (TFFT) that takes the maps generated by the continuous wavelet transform (CWT) as input. The TFFT can be connected to a neural network to form an end-to-end architecture, so we integrated the suite into traditional neural networks, such as convolutional neural networks (CNNs) and long short-term memory (LSTM) networks, to adjust their intermediate features. Comparative experiments showed that deep learning models equipped with the CWT-based TFFT suite outperformed the original architectures without the suite in gesture recognition tasks. The proposed TFFT suite therefore has promising applications in multi-subject gesture recognition and prosthetic control.
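The abstract does not describe the internal design of the TFFT suite, so the following is only a minimal sketch of where such a block would sit in an end-to-end pipeline: a 1-D sEMG window is converted to a CWT scalogram (via PyWavelets) and fed to a small CNN whose intermediate features pass through a placeholder transform module (a learned per-channel scale and shift). The names semg_to_scalogram, FeatureTransform, and ScalogramCNN, and all layer sizes, are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's code): sEMG window -> CWT
# scalogram -> CNN with a placeholder feature-transform block between stages.
import numpy as np
import pywt
import torch
import torch.nn as nn


def semg_to_scalogram(window, scales=np.arange(1, 33), wavelet="morl"):
    """Continuous wavelet transform of a 1-D sEMG window -> (scales, time) map."""
    coeffs, _ = pywt.cwt(window, scales, wavelet)
    return np.abs(coeffs).astype(np.float32)


class FeatureTransform(nn.Module):
    """Placeholder for a TFFT-like block: learned per-channel scale and shift."""
    def __init__(self, channels):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x):
        return self.gamma * x + self.beta


class ScalogramCNN(nn.Module):
    """Small CNN over CWT maps, with the transform applied to intermediate features."""
    def __init__(self, n_classes):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.transform = FeatureTransform(16)
        self.conv2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):            # x: (batch, 1, n_scales, n_samples)
        x = self.conv1(x)
        x = self.transform(x)        # adjust intermediate features
        x = self.conv2(x).flatten(1)
        return self.fc(x)


# Example: one 200-sample single-channel sEMG window, 10 gesture classes.
window = np.random.randn(200)
scalogram = torch.from_numpy(semg_to_scalogram(window))[None, None]  # (1, 1, 32, 200)
logits = ScalogramCNN(n_classes=10)(scalogram)                       # (1, 10)
```

In this sketch the CWT is a fixed preprocessing step and only the network (including the transform block) is trained; subject-specific guidance, as described in the abstract, would be injected through such a block rather than through the raw scalograms.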

Type
Research Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press


Footnotes

These authors contributed equally to this work and should be considered co-first authors.
