
Subject-Independent Brain-Computer Interfaces: A Comparative Study of Attention Mechanism-Driven Deep Learning Models

  • Conference paper
  • First Online:
Intelligent Human Computer Interaction (IHCI 2023)

Abstract

This research examines attention-mechanism-driven deep learning models for building subject-independent Brain-Computer Interfaces (BCIs). Three attention-based models were evaluated using leave-one-subject-out cross-validation. The hybrid temporal CNN and Vision Transformer (ViT) model performed well on the BCI Competition IV 2a dataset, achieving the highest average accuracy and outperforming the other models for 5 of the 9 subjects. It did not, however, perform best on the BCI Competition IV 2b dataset. A key challenge was the limited amount of training data, which is especially restrictive for transformer models and contributed to the variability in performance across datasets. Overall, the study highlights a promising approach to BCI design: combining attention mechanisms with deep learning to extract important inter-subject features from EEG data while filtering out irrelevant signals.
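
The following is a minimal sketch of the approach the abstract describes: a temporal-CNN front end feeding a self-attention (Transformer) encoder, evaluated with leave-one-subject-out cross-validation. It is not the authors' implementation; the layer sizes, kernel widths, training loop, and input shape (22-channel, 4-class trials as in BCI Competition IV 2a) are illustrative assumptions.

```python
# A minimal sketch of the abstract's approach (NOT the authors' implementation):
# a temporal-CNN front end feeding a self-attention (Transformer) encoder,
# evaluated with leave-one-subject-out (LOSO) cross-validation.
# Layer sizes, kernel widths, and the full-batch training loop are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn


class TemporalCNNTransformer(nn.Module):
    def __init__(self, n_channels=22, n_classes=4, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        # Temporal convolution over raw EEG; channels are mixed into d_model features.
        self.temporal = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=25, stride=4, padding=12),
            nn.BatchNorm1d(d_model),
            nn.ELU(),
        )
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                # x: (batch, channels, samples)
        z = self.temporal(x)             # (batch, d_model, time_steps)
        z = z.transpose(1, 2)            # (batch, time_steps, d_model) as attention tokens
        z = self.encoder(z)              # self-attention over temporal tokens
        return self.head(z.mean(dim=1))  # pool over time, then classify


def loso_accuracy(X, y, subjects, epochs=50, lr=1e-3):
    """Train on all subjects but one, test on the held-out subject, average over folds."""
    accs = []
    for test_subj in np.unique(subjects):
        train_idx, test_idx = subjects != test_subj, subjects == test_subj
        model = TemporalCNNTransformer(n_channels=X.shape[1])
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        X_tr = torch.as_tensor(X[train_idx], dtype=torch.float32)
        y_tr = torch.as_tensor(y[train_idx], dtype=torch.long)
        for _ in range(epochs):          # full-batch updates for brevity
            opt.zero_grad()
            loss = loss_fn(model(X_tr), y_tr)
            loss.backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            pred = model(torch.as_tensor(X[test_idx], dtype=torch.float32)).argmax(1)
        accs.append(float((pred.numpy() == y[test_idx]).mean()))
    return float(np.mean(accs))
```

Calling loso_accuracy(X, y, subjects) with X of shape (n_trials, n_channels, n_samples) and a per-trial array of subject IDs returns the average held-out-subject accuracy, which is the quantity the abstract compares across the three attention models.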

This work was supported by Nazarbayev University under the Faculty Development Competitive Research Grant Program (FDCRGP), Grant No. 021220FD2051.



Author information

Corresponding author: Berdakh Abibullaev



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Keutayeva, A., Abibullaev, B. (2024). Subject-Independent Brain-Computer Interfaces: A Comparative Study of Attention Mechanism-Driven Deep Learning Models. In: Choi, B.J., Singh, D., Tiwary, U.S., Chung, WY. (eds) Intelligent Human Computer Interaction. IHCI 2023. Lecture Notes in Computer Science, vol 14531. Springer, Cham. https://doi.org/10.1007/978-3-031-53827-8_23


  • DOI: https://doi.org/10.1007/978-3-031-53827-8_23

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-53826-1

  • Online ISBN: 978-3-031-53827-8

  • eBook Packages: Computer Science, Computer Science (R0)
