Decoding Individual and Shared Experiences of Media Perception Using CNN Architectures

  • Conference paper
Medical Image Understanding and Analysis (MIUA 2023)

Abstract

The brain is a complex organ capable of perceiving and interpreting a wide range of stimuli. Because of differences in brain chemistry and wiring, shaped by life experiences and environment, different people interpret the same stimuli differently. The objective of this study is to examine how CNN models capture and learn these differences and similarities in brain waves, using three publicly available EEG datasets. When exposed to media stimuli, each brain produces a unique response that nonetheless shares some structure with other people's responses to the same stimuli. To test whether neural models can correctly distinguish these common and individual components, we employed three widely used CNN architectures to decode the brain signals. Working with the pre-processed versions of the EEG data, we examined how the choice of time window affects feature learning in song and movie classification tasks and compared model performance across datasets. A minimum snippet length of 5 s was sufficient for the personalized model, whereas the maximum snippet length of 30 s was most effective for the generalized model. The deeper architecture, DeepConvNet, extracted personalized and generalized features best on the NMED-T and SEED datasets, while EEGNet performed better on NMED-H. The personalized model achieved maximum accuracies of 69%, 100%, and 56% on NMED-T, NMED-H, and SEED, respectively; in the generalized model, the maximum accuracies dropped to 18%, 37%, and 14%. On NMED-T, we achieved a 5% improvement over the state of the art when examining shared experiences. This gap highlights the out-of-distribution generalization problem and the role of individual differences in media perception, underscoring the need to develop personalized models alongside generalized models that capture features shared across individuals.
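The abstract outlines the pipeline: EEG recordings are cut into fixed-length snippets (5 s or 30 s), fed to compact CNNs such as EEGNet or DeepConvNet, and evaluated either per subject (personalized) or across subjects (generalized). The sketch below illustrates that pipeline under stated assumptions; the sampling rate, channel count, class count, layer sizes, and the `SmallEEGCNN` module are illustrative placeholders, not the authors' configuration or the published EEGNet/DeepConvNet definitions.

```python
# Minimal sketch (not the authors' code): slicing EEG into fixed-length
# snippets and contrasting a personalized (within-subject) split with a
# generalized (leave-subject-out) split. All shapes and hyperparameters
# below are assumptions for illustration only.
import numpy as np
import torch
import torch.nn as nn

FS = 125            # assumed sampling rate (Hz)
SNIPPET_SEC = 5     # 5 s snippets for the personalized setting; 30 s for generalized
N_CLASSES = 10      # e.g., number of songs to identify

def make_snippets(eeg, fs=FS, snippet_sec=SNIPPET_SEC):
    """Cut a (channels, time) trial into non-overlapping snippets."""
    win = fs * snippet_sec
    n = eeg.shape[1] // win
    return np.stack([eeg[:, i * win:(i + 1) * win] for i in range(n)])

class SmallEEGCNN(nn.Module):
    """Compact temporal + spatial convolution, loosely in the spirit of
    EEGNet/DeepConvNet; layer sizes are placeholders."""
    def __init__(self, n_channels=32, n_classes=N_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32)),   # temporal filters
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1)),           # spatial filters
            nn.BatchNorm2d(16), nn.ELU(), nn.AdaptiveAvgPool2d((1, 16)),
            nn.Flatten(), nn.Linear(16 * 16, n_classes),
        )

    def forward(self, x):           # x: (batch, 1, channels, time)
        return self.net(x)

# Toy data: 4 subjects, one trial per class, 32 channels, 60 s of signal each.
rng = np.random.default_rng(0)
X, y, subj = [], [], []
for s in range(4):
    for c in range(N_CLASSES):
        snips = make_snippets(rng.standard_normal((32, 60 * FS)).astype("float32"))
        X.append(snips); y += [c] * len(snips); subj += [s] * len(snips)
X = np.concatenate(X)[:, None]      # (n_snippets, 1, channels, time)
y, subj = np.array(y), np.array(subj)

# Personalized split: train and test snippets come from the same listener.
personal = subj == 0
print("personalized pool:", personal.sum(), "snippets from one listener")

# Generalized split: hold out an entire subject (out-of-distribution test).
train_gen, test_gen = subj != 3, subj == 3

model = SmallEEGCNN()
logits = model(torch.from_numpy(X[train_gen][:8]))
print(logits.shape)                 # (8, N_CLASSES)
```

In the generalized split the held-out subject is entirely unseen during training, which is the out-of-distribution setting the abstract refers to; in the personalized split, training and test snippets come from the same listener, which explains why shorter snippets can already suffice there.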

Author information

Corresponding author

Correspondence to Krishna Prasad Miyapuram.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Johri, R., Pandey, P., Miyapuram, K.P., Lomas, J.D. (2024). Decoding Individual and Shared Experiences of Media Perception Using CNN Architectures. In: Waiter, G., Lambrou, T., Leontidis, G., Oren, N., Morris, T., Gordon, S. (eds) Medical Image Understanding and Analysis. MIUA 2023. Lecture Notes in Computer Science, vol 14122. Springer, Cham. https://doi.org/10.1007/978-3-031-48593-0_14

  • DOI: https://doi.org/10.1007/978-3-031-48593-0_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-48592-3

  • Online ISBN: 978-3-031-48593-0

  • eBook Packages: Computer Science, Computer Science (R0)
