DOI: 10.1145/2931002.2931009

Decoupling light reflex from pupillary dilation to measure emotional arousal in videos

Published: 22 July 2016

Abstract

Predicting the exciting portions of a video is broadly useful, with applications in video summarization, similar-video search, and video recommendation. Researchers have proposed physiological indices such as pupillary dilation as measures of emotional arousal. The key difficulty in using the pupil to measure emotional arousal is accounting for the pupillary response to brightness changes. We propose a linear model of the pupillary light reflex that predicts a viewer's pupil diameter from incident light intensity alone. The residual between the measured pupil diameter and the model's prediction is attributed to the emotional arousal evoked by the corresponding scene. We evaluate the effectiveness of factoring out the pupillary light reflex for the particular application of video summarization: the residual is converted into an excitement score for each frame of a video. We show results on a variety of videos and compare against ground truth reported by three independent coders.
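The residual idea in the abstract can be illustrated with a minimal sketch: fit a linear model of pupil diameter against per-frame luminance, and treat what the brightness model cannot explain as the excitement signal. The arrays and the `arousal_scores` helper below are hypothetical; the paper's actual model, calibration, and temporal handling are described in the full text.

```python
import numpy as np

def arousal_scores(luminance, pupil_diameter):
    """Fit a linear light-reflex model pupil ~ a*luminance + b,
    then return the per-frame residual as an excitement score."""
    X = np.column_stack([luminance, np.ones_like(luminance)])
    coeffs, *_ = np.linalg.lstsq(X, pupil_diameter, rcond=None)
    residual = pupil_diameter - X @ coeffs
    # A large positive residual means the pupil dilated beyond
    # what the frame's brightness alone would predict.
    return residual

# Hypothetical per-frame data: pupil shrinks as luminance rises,
# except frame 2, which shows extra dilation despite mid brightness.
lum = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
pupil = np.array([6.0, 5.5, 5.7, 4.6, 4.1])
scores = arousal_scores(lum, pupil)
print(int(scores.argmax()))  # prints 2: the frame with unexplained dilation
```

With an intercept in the model, the residuals sum to zero, so the score is naturally centered: frames above zero dilate more than brightness predicts, frames below zero less.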

Supplementary Material

MOV File (p89-raiturkar.mov)




Published In

SAP '16: Proceedings of the ACM Symposium on Applied Perception
July 2016
149 pages
ISBN:9781450343831
DOI:10.1145/2931002
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. eyetracking
  2. linear model
  3. video understanding

Qualifiers

  • Research-article

Conference

SAP '16: ACM Symposium on Applied Perception 2016
July 22-23, 2016
Anaheim, California

Acceptance Rates

Overall Acceptance Rate 43 of 94 submissions, 46%

Article Metrics

  • Downloads (Last 12 months)73
  • Downloads (Last 6 weeks)2
Reflects downloads up to 30 Jan 2025

Cited By

  • (2024) Sweating the Details: Emotion Recognition and the Influence of Physical Exertion in Virtual Reality Exergaming. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-21. DOI: 10.1145/3613904.3642611
  • (2023) Open-DPSM: An open-source toolkit for modeling pupil size changes to dynamic visual inputs. Behavior Research Methods 56, 6, 5605-5621. DOI: 10.3758/s13428-023-02292-1
  • (2023) Physiological Signals and Affect as Predictors of Advertising Engagement. Sensors 23, 15, 6916. DOI: 10.3390/s23156916
  • (2023) A Human-Machine Collaborative Video Summarization Framework Using Pupillary Response Signals. 2023 6th International Conference on Information Communication and Signal Processing (ICICSP), 334-342. DOI: 10.1109/ICICSP59554.2023.10390562
  • (2022) Pupillary Light Reflex Correction for Robust Pupillometry in Virtual Reality. Proceedings of the ACM on Computer Graphics and Interactive Techniques 5, 2, 1-16. DOI: 10.1145/3530798
  • (2022) Is the avatar scared? Pupil as a perceptual cue. Computer Animation and Virtual Worlds 33, 2. DOI: 10.1002/cav.2040
  • (2021) A privacy-preserving approach to streaming eye-tracking data. IEEE Transactions on Visualization and Computer Graphics 27, 5, 2555-2565. DOI: 10.1109/TVCG.2021.3067787
  • (2021) Eye Movement. In Human Movements in Human-Computer Interaction (HCI), 23-37. DOI: 10.1007/978-3-030-90004-5_3
  • (2020) Eye-Tracking Analysis for Emotion Recognition. Computational Intelligence and Neuroscience 2020. DOI: 10.1155/2020/2909267
  • (2020) Sequence Models in Eye Tracking: Predicting Pupil Diameter During Learning. ACM Symposium on Eye Tracking Research and Applications, 1-3. DOI: 10.1145/3379157.3391653
