Neural correlates of affective content: application to perceptual tagging of video

  • S.I.: TAM-LHR
  • Neural Computing and Applications

Abstract

Over the past years, a digital multimedia revolution has touched every walk of life, and the resulting volume of unannotated, unstructured multimedia content has become a key research challenge. Multimedia content is usually created with intended emotions that the creator wants to induce in viewers, so the affective impact of the content can be measured by analyzing the emotions elicited in its viewers. In this paper, we present a rigorous study of human cognition using EEG signals recorded while participants watch videos, in order to analyze the affective content of those videos. The analysis establishes a relationship between video content and the viewer's emotional state: the most informative scalp locations and frequency ranges are identified for two categories of videos, i.e., exciting and sad. Furthermore, a common affective response (CAR) is extracted to obtain features that distinguish these two categories. The CAR is computed and tested on the publicly available dataset AMIGOS, and the results show the utility of cognitive features at the identified scalp locations and frequency ranges for automatic tagging of video content. This work explores the applicability of neuro-signals to mouse-free video tagging based on the viewer's excitement level, which can augment a range of brain–computer interface (BCI) devices and further aid automatic retrieval of video content that viewers find exciting and interesting. With this analysis, we aim to provide a foundation for customizing a low-cost, mobile EEG system for automatic analysis and retrieval of videos.
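To make the feature-extraction idea concrete, the sketch below computes per-channel EEG band power with Welch's PSD estimate [45] and averages it across viewers into a CAR-style vector per video category. This is a minimal illustration, not the authors' exact pipeline: the band edges, the 14-channel/128 Hz layout (typical of the Emotiv headset used for AMIGOS), and the synthetic input are assumptions.

```python
# Minimal sketch (not the authors' pipeline): per-channel EEG band power
# via Welch's method [45], averaged across viewers into a CAR-style vector.
import numpy as np
from scipy.signal import welch

FS = 128  # Hz; assumed sampling rate (AMIGOS EEG was recorded at 128 Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(epoch, fs=FS):
    """Mean PSD per channel in each frequency band.
    epoch: (n_channels, n_samples) array for one viewer/video."""
    f, pxx = welch(epoch, fs=fs, nperseg=2 * fs, axis=-1)
    return {name: pxx[:, (f >= lo) & (f < hi)].mean(axis=-1)
            for name, (lo, hi) in BANDS.items()}

def common_affective_response(epochs):
    """Average band-power features over all viewers of one video
    category, yielding one (n_channels,) vector per band."""
    per_viewer = [band_powers(e) for e in epochs]
    return {b: np.mean([fv[b] for fv in per_viewer], axis=0) for b in BANDS}

# Toy usage with synthetic data: 5 viewers, 14 channels, 10 s epochs.
rng = np.random.default_rng(0)
epochs_exciting = [rng.standard_normal((14, 10 * FS)) for _ in range(5)]
car_exciting = common_affective_response(epochs_exciting)
print({band: vec.shape for band, vec in car_exciting.items()})
```

A tagger in the spirit of the paper could then compare a held-out viewer's band-power vector against each category's stored CAR (e.g., by correlation) and assign the tag of the best-matching category.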



Availability of data and material

The publicly available AMIGOS dataset [37].

Code availability

Custom code.

References

  1. Caviedes JE (2012) The evolution of video processing technology and its main drivers. Proc IEEE 100(4):872–877. https://doi.org/10.1109/JPROC.2011.2182072

  2. Dimitrova N, Zhang HJ, Shahraray B, Sezan I, Huang T, Zakhor A (2002) Applications of video-content analysis and retrieval. IEEE Multimed 9(3):42–55. https://doi.org/10.1109/MMUL.2002.1022858

  3. Smith MA, Chen T (2005) Image and video indexing and retrieval (Section 9.1). In: Bovik AL (ed) Handbook of image and video processing (communications, networking and multimedia), 2nd edn. Academic Press, New York. https://doi.org/10.1016/B978-012119792-6/50121-2

  4. Isola P, Xiao J, Parikh D, Torralba A, Oliva A (2013) What makes a photograph memorable? IEEE Trans Pattern Anal Mach Intell 36(7):1469–1482. https://doi.org/10.1109/TPAMI.2013.200

  5. Müller V (2008) Margaret A. Boden, Mind as machine: a history of cognitive science, 2 vols. Mind Mach 18:121–125. https://doi.org/10.1007/s11023-008-9091-9

  6. Hassanien AE, Azar A (2014) Brain computer interfaces: current trends and applications, intelligent systems reference library, vol 74. Springer, Cham

  7. Ghaemmaghami P (2017) Information retrieval from neurophysiological signals. Ph.D. thesis, University of Trento

  8. Yang Y, Bloch I, Chevallier S, Wiart J (2015) Subject-specific channel selection using time information for motor imagery brain-computer interfaces. Cogn Comput 8:505–518. https://doi.org/10.1007/s12559-015-9379-z

  9. Duan L, Bao M, Cui S, Qiao Y, Miao J (2017) Motor imagery EEG classification based on kernel hierarchical extreme learning machine. Cogn Comput 9:758–765. https://doi.org/10.1007/s12559-017-9494-0

  10. Padfield N, Zabalza J, Zhao H, Vargas VM, Ren J (2019) EEG-based brain-computer interfaces using motor-imagery: techniques and challenges. Sensors. https://doi.org/10.3390/s19061423

  11. Kumar S, Riddoch MJ, Humphreys G (2013) Mu rhythm desynchronization reveals motoric influences of hand action on object recognition. Front Hum Neurosci 7:66. https://doi.org/10.3389/fnhum.2013.00066

  12. Hiyoshi-Taniguchi K, Kawasaki M, Yokota T, Bakardjian H, Fukuyama H, Cichocki A, Vialatte FB (2015) EEG correlates of voice and face emotional judgments in the human brain. Cogn Comput 7:11–19. https://doi.org/10.1007/s12559-013-9225-0

  13. Li J, Zhang Z, He H (2018) Hierarchical convolutional neural networks for EEG-based emotion recognition. Cogn Comput 10:368–380. https://doi.org/10.1007/s12559-017-9533-x

  14. Gawali BW, Rao S, Abhang P, Rokade P, Mehrotra SC (2012) Classification of EEG signals for different emotional states. In: Fourth international conference on advances in recent technologies in communication and computing (ARTCom2012), pp 177–181. https://doi.org/10.1049/cp.2012.2521

  15. Frydenlund A, Rudzicz F (2015) Emotional affect estimation using video and EEG data in deep neural networks. In: Barbosa D, Milios E (eds) Advances in artificial intelligence. Canadian AI 2015. Lecture notes in computer science, vol 9091. Springer, Cham. https://doi.org/10.1007/978-3-319-18356-5_24

  16. Alarcao SM, Fonseca MJ (2018) Emotions recognition using EEG signals: a survey. IEEE Trans Affect Comput. https://doi.org/10.1109/TAFFC.2017.2714671

  17. Roy Y, Banville H, Albuquerque I, Gramfort A, Falk TH, Faubert J (2019) Deep learning-based electroencephalography analysis: a systematic review. J Neural Eng 16(5):051001. https://doi.org/10.1088/1741-2552/ab260c (PMID: 31151119)

  18. Vecchiato G, Cherubino P, Maglione AG, Ezquierro MT, Marinozzi F, Bini F, Trettel A, Babiloni F (2014) How to measure cerebral correlates of emotions in marketing relevant tasks. Cogn Comput 6:856–871. https://doi.org/10.1007/s12559-014-9304-x

  19. Gupta A, Shreyam R, Garg R, Sayed T (2017) Correlation of neuromarketing to neurology. IOP Conf Ser Mater Sci Eng 225:012129. https://doi.org/10.1088/1757-899X/225/1/012129

  20. Bigdely-Shamlo N, Vankov A, Ramirez RR, Makeig S (2008) Brain activity-based image classification from rapid serial visual presentation. IEEE Trans Neural Syst Rehabil Eng 16(5):432–441. https://doi.org/10.1109/TNSRE.2008.2003381

  21. Wang J, Pohlmeyer E, Hanna B, Jiang YG, Sajda P, Chang SF (2009) Brain state decoding for rapid image retrieval. In: Proceedings of the 17th ACM international conference on multimedia, pp 945–954. ACM, New York. https://doi.org/10.1145/1631272.1631463

  22. Huang Y, Erdogmus D, Pavel M, Mathan S, Hild KE (2011) A framework for rapid visual image search using single-trial brain evoked responses. Neurocomputing 74(12):2041–2051. https://doi.org/10.1016/j.neucom.2010.12.025

  23. Lees S, Dayan N, Cecotti H, McCullagh P, Maguire L, Lotte F, Coyle D (2018) A review of rapid serial visual presentation-based brain-computer interfaces. J Neural Eng 15(2):021001. https://doi.org/10.1088/1741-2552/aa9817

  24. Kapoor A, Shenoy P (2008) Combining brain computer interfaces with vision for object categorization. In: 2008 IEEE conference on computer vision and pattern recognition, pp 1–8. https://doi.org/10.1109/CVPR.2008.4587618

  25. Mohedano E, Healy G, McGuinness K, Giró-i-Nieto X, O’Connor NE, Smeaton AF (2014) Object segmentation in images using EEG signals. In: Proceedings of the 22nd ACM international conference on multimedia, pp 417–426. ACM, New York. https://doi.org/10.1145/2647868.2654896

  26. Mohedano E, McGuinness K, Healy G, O’Connor NE, Smeaton AF, Salvador A, Porta S, Nieto XG (2015) Exploring EEG for object detection and retrieval. In: Proceedings of the 5th ACM on international conference on multimedia retrieval, pp 591–594. ACM, New York. https://doi.org/10.1145/2671188.2749368

  27. Healy G, Smeaton AF (2011) Optimising the number of channels in EEG-augmented image search. In: Proceedings of the 25th BCS conference on human–computer interaction, pp 157–162. British Computer Society, Swinton

  28. Soleymani M, Pantic M (2013) Multimedia implicit tagging using EEG signals. In: 2013 IEEE international conference on multimedia and expo (ICME), San Jose, CA, USA, 2013, pp 1–6. https://doi.org/10.1109/ICME.2013.6607623

  29. Tauscher JP, Mustafa M, Magnor M (2017) Comparative analysis of three different modalities for perception of artifacts in videos. ACM Trans Appl Percept. https://doi.org/10.1145/3129289

  30. Mutasim AK, Tipu RS, Bashar MR, Amin MA (2017) Video category classification using wireless EEG. In: Zeng Y, He Y, Kotaleski JH, Martone M, Xu B, Peng H, Luo Q (eds) Brain informatics. Lecture notes in computer science, vol 10654. Springer, Cham, pp 39–48. https://doi.org/10.1007/978-3-319-70772-3_4

  31. Nussbaum PA, Herrera A, Joshi R, Hargraves R (2012) Analysis of viewer EEG data to determine categorization of short video clip. Procedia Comput Sci 12:158–163. https://doi.org/10.1016/j.procs.2012.09.047

  32. Wehbe RR, Kappen DL, Rojas D, Klauser M, Kapralos B, Nacke LE (2013) EEG-based assessment of video and in-game learning. CHI Ext Abstr. https://doi.org/10.1145/2468356.2468474

  33. Moon J, Kim Y, Lee H, Bae C, Yoon WC (2013) Extraction of user preference for video stimuli using EEG-based user responses. ETRI J 35(6):1105–1114. https://doi.org/10.4218/etrij.13.0113.0194

  34. Salehin MM, Paul M (2017) Affective video events summarization using EMD decomposed EEG signals (EDES). In: 2017 international conference on digital image computing: techniques and applications (DICTA), pp 1–6. https://doi.org/10.1109/DICTA.2017.8227402

  35. Baveye Y, Chamaret C, Dellandréa E, Chen L (2018) Affective video content analysis: a multidisciplinary insight. IEEE Trans Affect Comput 9(4):396–409. https://doi.org/10.1109/TAFFC.2017.2661284

  36. Hanjalic A, Xu L (2005) Affective video content representation and modeling. IEEE Trans Multimed 7(1):143–154. https://doi.org/10.1109/TMM.2004.840618

  37. Correa JAM, Abadi MK, Sebe N, Patras I (2018) AMIGOS: a dataset for affect, personality and mood research on individuals and groups. IEEE Trans Affect Comput 12(2):479–493. https://doi.org/10.1109/TAFFC.2018.2884461

  38. Kossaifi J, Tzimiropoulos G, Todorovic S, Pantic M (2017) AFEW-VA database for valence and arousal estimation in-the-wild. Image Vis Comput 65(C):23–36. https://doi.org/10.1016/j.imavis.2017.02.001

  39. Abadi MK, Subramanian R, Kia SM, Avesani P, Patras I, Sebe N (2015) DECAF: MEG-based multimodal database for decoding affective physiological responses. IEEE Trans Affect Comput 6(3):209–222. https://doi.org/10.1109/TAFFC.2015.2392932

  40. Soleymani M, Lichtenauer J, Pun T, Pantic M (2012) A multimodal database for affect recognition and implicit tagging. IEEE Trans Affect Comput 3(1):42–55. https://doi.org/10.1109/T-AFFC.2011.25

  41. Akansu AN, Haddad RA (2001) Chapter 6: wavelet transform. In: Akansu AN, Haddad RA (eds) Multiresolution signal decomposition, 2nd edn. Academic Press, London, pp 391–442. https://doi.org/10.1016/B978-012047141-6/50006-9

  42. Kehtarnavaz N (2008) Chapter 7: frequency domain processing. In: Kehtarnavaz N (ed) Digital signal processing system design, 2nd edn. Academic Press, London, pp 175–196. https://doi.org/10.1016/B978-0-12-374490-6.00007-6

  43. Vivas EL, García-González A, Figueroa I, Fuentes RQ (2013) Discrete wavelet transform and ANFIS classifier for brain-machine interface based on EEG. In: 2013 6th international conference on human system interactions (HSI), pp 137–144. https://doi.org/10.1109/HSI.2013.6577814

  44. Subasi A (2007) EEG signal classification using wavelet feature extraction and a mixture of expert model. Expert Syst Appl 32:1084–1093. https://doi.org/10.1016/j.eswa.2006.02.005

  45. Welch P (1967) The use of fast Fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms. IEEE Trans Audio Electroacoust 15(2):70–73. https://doi.org/10.1109/TAU.1967.1161901

  46. Doma V, Pirouz M (2020) A comparative analysis of machine learning methods for emotion recognition using EEG and peripheral physiological signals. J Big Data 7:18. https://doi.org/10.1186/s40537-020-00289-7

  47. Hu X, Chen J, Wang F, Zhang D (2019) Ten challenges for EEG-based affective computing. Brain Sci Adv 5(1):1–20. https://doi.org/10.1177/2096595819896200

  48. Bezugam S, Majumdar S, Ralekar C, Gandhi T (2021) Efficient video summarization framework using EEG and eye-tracking signals. arXiv preprint arXiv:2101.11249

Acknowledgements

The study presented in this paper is based on the AMIGOS dataset [37] collected by Correa et al. The dataset contains a wide range of physiological recordings (EEG, ECG, and GSR) of participants watching videos drawn from all valence/arousal quadrants. We thank the developers for providing us access to this dataset.

Funding

Not applicable.

Author information

Authors and Affiliations

Authors

Contributions

SS contributed to conceptualization, methodology, software, data curation, validation, and writing—original draft preparation. AKD contributed to conceptualization, methodology, supervision, reviewing and editing. PR contributed to conceptualization, supervision, reviewing and editing. A contributed to reviewing and editing.

Corresponding author

Correspondence to Ashwani Kumar Dubey.

Ethics declarations

Consent to participate

All authors consent to participate.

Consent for publication

All authors consent to publication.

Informed consent

The article applies its methodologies to the publicly available AMIGOS dataset. As per the dataset description, participants provided written consent to the developers before participation.

Ethics approval

Not applicable.

Human and animal rights

This article does not contain any studies with animals performed by any of the authors.

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Sharma, S., Dubey, A.K., Ranjan, P. et al. Neural correlates of affective content: application to perceptual tagging of video. Neural Comput & Applic 35, 7925–7941 (2023). https://doi.org/10.1007/s00521-021-06591-6

