
Research on Brain-Computer Interfaces in the Entertainment Field

  • Conference paper
  • First Online:
Human-Computer Interaction (HCII 2023)

Abstract

Brain-computer interfaces (BCIs) have become commonplace in human-computer interaction. New forms of interaction have been incorporated, changing the ways in which individuals exchange information with computational systems. An increasingly common type of BCI is based on electroencephalography (EEG), that is, on reading brainwaves and converting them into digital data that computers can use. This type of interface is most widespread in the health field, where research has shown great potential for the treatment of both physical and psychological trauma. However, few studies were identified on the use of EEG-based BCIs in other areas of knowledge, such as entertainment and the enjoyment of audiovisual content. Although headsets are commercially available in a wide variety of formats and prices, studies on the use of this technology for mapping emotions, tastes, and subjective relationships with audiovisual content remain scarce. Within this context, a survey was carried out in the form of a systematic literature review (SLR) to identify research and scientific projects in progress, with complete or partial results, on brain-computer interfaces in fields related to entertainment studies. The focus is to understand how the topic of emotion is addressed in research based on electroencephalography and whether any research points to the use of EEG-based BCIs to identify emotions during audiovisual enjoyment. By analyzing the three most important databases for the human-computer interaction area (ACM, Springer, and IEEE) and applying inclusion and exclusion criteria, 56 articles on the subject were identified. A synthesis of these papers is presented in this article.
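The abstract describes EEG-based BCIs as reading brainwaves and converting them into digital data for a computer. Purely as an illustration of that idea, the minimal Python sketch below reduces one window of a single EEG channel to band-power features, a common first step in EEG emotion studies; it is not taken from the paper, and the sampling rate, band limits, and names are assumptions.

    # Illustrative sketch only: how raw EEG samples from one headset channel
    # might be reduced to simple band-power features before emotion analysis.
    # All names and parameter values here are assumptions, not the authors' method.
    import numpy as np
    from scipy.signal import welch

    SAMPLING_RATE_HZ = 256  # assumed consumer-headset sampling rate
    BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # canonical EEG bands

    def band_powers(eeg_window):
        """Average spectral power per band for one channel window (1-D array)."""
        freqs, psd = welch(eeg_window, fs=SAMPLING_RATE_HZ, nperseg=SAMPLING_RATE_HZ)
        return {name: float(psd[(freqs >= lo) & (freqs < hi)].mean())
                for name, (lo, hi) in BANDS.items()}

    # Example with two seconds of synthetic data standing in for headset output.
    window = np.random.randn(2 * SAMPLING_RATE_HZ)
    print(band_powers(window))

In practice, features of this kind would be passed to a classifier trained on labelled emotional responses; how existing studies approach that step is what the survey examines.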



Acknowledgments

This work was funded by Public Call no. 03 Produtividade em Pesquisa PROPESQ/PRPG/UFPB, proposal code PVL13414-2020.

Author information


Corresponding author

Correspondence to Valdecir Becker.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

de Queiroz Cavalcanti, D., Melo, F., Silva, T., Falcão, M., Cavalcanti, M., Becker, V. (2023). Research on Brain-Computer Interfaces in the Entertainment Field. In: Kurosu, M., Hashizume, A. (eds) Human-Computer Interaction. HCII 2023. Lecture Notes in Computer Science, vol 14011. Springer, Cham. https://doi.org/10.1007/978-3-031-35596-7_26


  • DOI: https://doi.org/10.1007/978-3-031-35596-7_26

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-35595-0

  • Online ISBN: 978-3-031-35596-7

  • eBook Packages: Computer Science, Computer Science (R0)
