
MTSAN-MI: Multiscale Temporal-Spatial Convolutional Self-attention Network for Motor Imagery Classification

  • Conference paper, published in Neural Information Processing (ICONIP 2023)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1963)

Abstract

EEG signals are widely utilized in brain-computer interfaces, where motor imagery (MI) data plays a crucial role. Effectively aligning MI-based EEG signals for feature extraction, decoding, and classification has long been a significant challenge. Decoding methods based on convolutional neural networks often face the problem of selecting an optimal receptive field, and convolution in the spatial domain cannot fully exploit the rich spatial topological information contained in EEG signals. In this paper, we propose a multiscale temporal-spatial convolutional self-attention network for motor imagery classification (MTSAN-MI). The proposed model starts with a multiscale temporal-spatial convolution module, in which temporal convolutional layers of different scales in three parallel branches extract features matched to their respective receptive fields, and graph convolutional networks better exploit the intrinsic relationships between channels. A multi-head self-attention module is connected directly afterwards to capture global dependencies within the temporal-spatial features. Evaluation experiments on two MI-based EEG datasets show that the proposed model achieves state-of-the-art performance on one dataset and is comparable to the best method on the other. An ablation study further confirms the importance of each component of the framework.
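
The abstract describes the overall pipeline but this page gives no implementation details. Purely as orientation, the minimal PyTorch-style sketch below illustrates the general pattern the abstract names: parallel temporal convolutions at several kernel lengths, a simple learnable-adjacency graph convolution over EEG channels, and multi-head self-attention over the resulting feature sequence. All layer sizes, kernel lengths, the pooling step, and the classifier head are hypothetical placeholders and are not taken from the paper.

```python
# Illustrative sketch only -- NOT the authors' implementation.
# Kernel sizes, filter counts, and the learnable adjacency are hypothetical.
import torch
import torch.nn as nn


class MultiscaleTemporalSpatialBlock(nn.Module):
    """Three parallel temporal convolutions (different kernel lengths)
    followed by a single-layer graph convolution over EEG channels."""

    def __init__(self, n_channels=22, kernel_sizes=(15, 31, 63), n_filters=16):
        super().__init__()
        # One temporal branch per kernel size; padding preserves the time length.
        self.branches = nn.ModuleList(
            nn.Conv2d(1, n_filters, kernel_size=(1, k), padding=(0, k // 2))
            for k in kernel_sizes
        )
        # Learnable adjacency over EEG channels for a simple graph convolution.
        self.adjacency = nn.Parameter(torch.eye(n_channels))
        self.graph_weight = nn.Linear(len(kernel_sizes) * n_filters,
                                      len(kernel_sizes) * n_filters)

    def forward(self, x):
        # x: (batch, 1, n_channels, n_times)
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        # feats: (batch, F, n_channels, n_times) with F = 3 * n_filters.
        # Graph convolution: mix information across channels via A,
        # then apply a feature transform, i.e. roughly A X W per time step.
        adj = torch.softmax(self.adjacency, dim=-1)
        feats = torch.einsum("cd,bfdt->bfct", adj, feats)
        feats = self.graph_weight(feats.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        return torch.relu(feats)


class TinyMTSAN(nn.Module):
    """Toy end-to-end pipeline: multiscale temporal-spatial block ->
    multi-head self-attention over time -> linear classifier."""

    def __init__(self, n_channels=22, n_classes=4, n_filters=16, n_heads=4):
        super().__init__()
        embed_dim = 3 * n_filters
        self.backbone = MultiscaleTemporalSpatialBlock(n_channels, n_filters=n_filters)
        # Collapse the channel axis and downsample time to form a token sequence.
        self.pool = nn.AvgPool2d(kernel_size=(n_channels, 8))
        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        self.classifier = nn.Linear(embed_dim, n_classes)

    def forward(self, x):
        feats = self.backbone(x)                      # (B, F, C, T)
        tokens = self.pool(feats).squeeze(2)          # (B, F, T')
        tokens = tokens.permute(0, 2, 1)              # (B, T', F) token sequence
        attended, _ = self.attn(tokens, tokens, tokens)
        return self.classifier(attended.mean(dim=1))  # pool over tokens, classify


if __name__ == "__main__":
    # One batch of 22-channel trials, 1000 samples each (e.g. 4 s at 250 Hz,
    # roughly BCI Competition IV 2a-like dimensions).
    dummy = torch.randn(8, 1, 22, 1000)
    print(TinyMTSAN()(dummy).shape)  # torch.Size([8, 4])
```

The sketch only mirrors the block ordering stated in the abstract (multiscale temporal convolution, channel-wise graph convolution, then multi-head self-attention); consult the paper itself for the actual architecture and hyperparameters.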

Acknowledgments

Research supported by the National Key R&D Program of China, grant no. 2021YFC0122700; National Natural Science Foundation of China, grant no. 61904038 and no. U1913216; Shanghai Sailing Program, grant no. 19YF1403600; Shanghai Municipal Science and Technology Commission, grant no. 19441907600; Opening Project of Zhejiang Lab, grant no. 2021MC0AB01; Fudan University-CIOMP Joint Fund, grant no. FC2019-002; Opening Project of Shanghai Robot R&D and Transformation Functional Platform, grant no. KEH2310024; Ji Hua Laboratory, grant no. X190021TB190 and no. X190021TB193; Shanghai Municipal Science and Technology Major Project, grant no. 2021SHZDZX0103 and no. 2018SHZDZX01; ZJ Lab; and Shanghai Center for Brain Science and Brain-Inspired Technology.

Author information

Corresponding author

Correspondence to Xiaoyang Kang.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Wang, J., Luo, Y., Wang, L., Zhang, L., Kang, X. (2024). MTSAN-MI: Multiscale Temporal-Spatial Convolutional Self-attention Network for Motor Imagery Classification. In: Luo, B., Cheng, L., Wu, ZG., Li, H., Li, C. (eds) Neural Information Processing. ICONIP 2023. Communications in Computer and Information Science, vol 1963. Springer, Singapore. https://doi.org/10.1007/978-981-99-8138-0_27

  • DOI: https://doi.org/10.1007/978-981-99-8138-0_27

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8137-3

  • Online ISBN: 978-981-99-8138-0

  • eBook Packages: Computer Science, Computer Science (R0)
