
Single-Trial Detection of Event-Related Potentials with Artificial Examples Based on Coloring Transformation

  • Conference paper
  • In: Recent Trends in Image Processing and Pattern Recognition (RTIP2R 2022)

Abstract

Non-invasive Brain-Computer Interfaces (BCIs) based on electroencephalography (EEG) recordings are the most common type of BCI. The detection of Event-Related Potentials (ERPs) evoked by visual stimuli is one of the main BCI paradigms, e.g., the detection of the P300 ERP component used in the P300 speller. The typing speed and the information transfer rate of a BCI speller are directly related to single-trial detection performance, i.e., the binary classification of brain responses evoked by target vs. non-target stimuli. Many techniques have been proposed in the literature, ranging from shallow approaches using linear discriminant analysis to hierarchical and deep learning methods. For BCIs that require a calibration session, reducing its duration is critical for deployment in clinical settings. For this reason, data augmentation approaches that increase the size of the training database can improve performance while keeping the number of calibration trials unchanged. In this paper, we propose to generate artificial trials based on the properties of the distribution of the signals after spatial filtering, using the coloring transformation. The approach is compared with other methods for the single-trial detection of ERPs on a public database of 8 subjects with amyotrophic lateral sclerosis. The results support the conclusion that artificial trials based on the coloring transformation can be used to train a classifier. However, when added as a data augmentation technique, they do not provide a substantial improvement over data augmentation with temporally shifted examples.
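
The core idea of coloring-based augmentation can be illustrated with a short sketch: estimate the mean evoked response and the covariance of the spatially filtered trials of one class, then generate an artificial trial as the class mean plus white Gaussian noise "colored" by a Cholesky factor of that covariance. The snippet below is a minimal NumPy sketch under these assumptions; the function coloring_transform_trial, the array shapes, and the regularization term are illustrative and not the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def coloring_transform_trial(trials, reg=1e-6, rng=rng):
    """Illustrative sketch (not the authors' pipeline): generate one
    artificial trial from spatially filtered trials.

    trials: array of shape (n_trials, n_filters, n_samples), e.g. the
    target-class responses after a spatial filter such as xDAWN.
    """
    n_trials, n_filters, n_samples = trials.shape
    mean_erp = trials.mean(axis=0)                 # average evoked response (template)
    residuals = trials - mean_erp                  # trial-to-trial variability
    # Spatial covariance of the residuals across the filtered components
    X = residuals.transpose(1, 0, 2).reshape(n_filters, -1)
    cov = np.cov(X) + reg * np.eye(n_filters)      # small ridge for numerical stability
    L = np.linalg.cholesky(cov)                    # "coloring" matrix
    white = rng.standard_normal((n_filters, n_samples))
    return mean_erp + L @ white                    # colored noise added to the template

# Hypothetical example: 50 target trials, 4 filtered components, 128 time samples
trials = rng.standard_normal((50, 4, 128))
artificial = coloring_transform_trial(trials)
print(artificial.shape)  # (4, 128)
```

In a data-augmentation setting, artificial trials generated this way would be appended to the real calibration trials of the corresponding class before training the single-trial classifier.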

This study was supported by the NIH-R15 NS118581 project.



Author information

Correspondence to Hubert Cecotti.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Cecotti, H., Jaimes, S. (2023). Single-Trial Detection of Event-Related Potentials with Artificial Examples Based on Coloring Transformation. In: Santosh, K., Goyal, A., Aouada, D., Makkar, A., Chiang, YY., Singh, S.K. (eds) Recent Trends in Image Processing and Pattern Recognition. RTIP2R 2022. Communications in Computer and Information Science, vol 1704. Springer, Cham. https://doi.org/10.1007/978-3-031-23599-3_28


  • DOI: https://doi.org/10.1007/978-3-031-23599-3_28

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-23598-6

  • Online ISBN: 978-3-031-23599-3

  • eBook Packages: Computer Science, Computer Science (R0)
