EEG Representations of Spatial and Temporal Features in Imagined Speech and Overt Speech

  • Conference paper
Pattern Recognition (ACPR 2019)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12047)

Abstract

Imagined speech is an emerging paradigm for intuitive control of brain-computer interface (BCI)-based communication systems. Although the decoding performance of imagined speech is improving as new architectures are actively proposed, the fundamental question of what components these models actually decode remains open. Since imagined speech refers to the internal mechanism of speech production, it may naturally resemble the distinctive features of overt speech. In this paper, we investigate the close relation between the spatial and temporal features of imagined speech and overt speech using electroencephalography (EEG) signals. Based on common spatial pattern (CSP) features, we obtained average thirteen-class classification accuracies of 16.2% for imagined speech and 59.9% for overt speech (chance rate = 7.7%). Although overt speech showed significantly higher classification performance than imagined speech, we found potentially similar common spatial patterns for identical classes of imagined and overt speech. Furthermore, in the temporal domain, we observed analogous grand-averaged potentials for the most distinguishable classes in the two speech paradigms. Specifically, the amplitude correlation between imagined speech and overt speech was 0.71 for the class with the highest true positive rate. The similar spatial and temporal features of the two paradigms may provide a key to bottom-up decoding of imagined speech, suggesting the possibility of robust multiclass classification of imagined speech. This could be a milestone toward comprehensive decoding of speech-related paradigms based on their shared underlying patterns.
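To make the spatial-feature analysis concrete, the sketch below shows one way to implement multiclass decoding with common spatial patterns in a one-versus-rest (OVR) scheme, which the classification setup described above implies. This is a minimal illustration, not the authors' exact pipeline: the filter count (n_pairs), the ridge term, the LDA classifier, and all function names are assumptions made for the example.

```python
# Hypothetical OVR-CSP decoding sketch (NumPy/scikit-learn); illustrative only.
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def class_covariance(epochs):
    """Trace-normalized spatial covariance, averaged over epochs (n_epochs, n_ch, n_t)."""
    covs = [x @ x.T / np.trace(x @ x.T) for x in epochs]
    return np.mean(covs, axis=0)

def csp_filters(cov_a, cov_b, n_pairs=3):
    """CSP filters from the generalized eigenproblem cov_a w = lambda (cov_a + cov_b) w;
    the eigenvectors with the most extreme eigenvalues are the most discriminative."""
    composite = cov_a + cov_b + 1e-10 * np.eye(cov_a.shape[0])  # small ridge for stability
    vals, vecs = eigh(cov_a, composite)                         # eigenvalues in ascending order
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, picks].T                                     # (2 * n_pairs, n_channels)

def csp_features(epochs, W):
    """Log-variance of spatially filtered epochs -- the standard CSP feature."""
    filtered = np.einsum('fc,ect->eft', W, epochs)
    return np.log(np.var(filtered, axis=2))

def fit_ovr_csp(X, y, n_pairs=3):
    """One binary CSP + LDA model per class (class k versus the rest)."""
    models = []
    for k in np.unique(y):
        W = csp_filters(class_covariance(X[y == k]),
                        class_covariance(X[y != k]), n_pairs)
        clf = LinearDiscriminantAnalysis().fit(csp_features(X, W), (y == k).astype(int))
        models.append((k, W, clf))
    return models

def predict_ovr_csp(models, X):
    """Assign each epoch to the class whose OVR model gives the highest posterior."""
    scores = np.stack([clf.predict_proba(csp_features(X, W))[:, 1]
                       for _, W, clf in models], axis=1)
    labels = np.array([k for k, _, _ in models])
    return labels[np.argmax(scores, axis=1)]
```

The temporal comparison reported above can be approximated the same way: average the epochs of one class within each paradigm to obtain grand-averaged time courses, then compute their Pearson correlation (e.g., `np.corrcoef(avg_imagined, avg_overt)[0, 1]`).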

Acknowledgements

This work was supported by the Institute for Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (No. 2017-0-00451; Development of BCI based Brain and Cognitive Computing Technology for Recognizing User’s Intentions using Deep Learning). The authors thank D.-K. Han for useful discussions on the data analysis.

Author information

Correspondence to Seong-Whan Lee.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Lee, SH., Lee, M., Lee, SW. (2020). EEG Representations of Spatial and Temporal Features in Imagined Speech and Overt Speech. In: Palaiahnakote, S., Sanniti di Baja, G., Wang, L., Yan, W. (eds) Pattern Recognition. ACPR 2019. Lecture Notes in Computer Science, vol 12047. Springer, Cham. https://doi.org/10.1007/978-3-030-41299-9_30

  • DOI: https://doi.org/10.1007/978-3-030-41299-9_30

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-41298-2

  • Online ISBN: 978-3-030-41299-9

  • eBook Packages: Computer Science, Computer Science (R0)
