ABSTRACT
This paper presents a novel perspective on human attention span modeling based on gaze estimation from head pose data extracted from videos. This is achieved by devising specialized 2D visualization plots that capture gaze progression and gaze sustenance over time. A low-resolution setting is assumed, as is the case with most crowd surveillance videos, wherein retinal analysis and iris-pattern extraction for individual subjects are infeasible. The resulting information is useful for studies of the gaze behavior of humans in a crowded place, or in controlled environments such as seminars and office meetings. The subject of study in this paper is the extraction of useful information regarding an individual's attention span from the spatial and temporal analysis of gaze points. Different solutions, ranging from temporal gaze plots to sustained attention span graphs, are investigated, and the results are compared with existing techniques for attention span modeling and visualization.
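The idea of deriving coarse gaze points from head pose and then measuring how long gaze dwells in one region can be illustrated with a minimal sketch. All names, thresholds, and the projection model below are illustrative assumptions, not the paper's actual method: head-pose yaw/pitch angles are projected onto a viewing plane, and a "sustained span" is any run of consecutive frames whose gaze points stay within a fixed radius of the run's first point.

```python
# Hypothetical sketch (not the paper's method): coarse gaze points from
# head-pose angles, and sustained attention spans over those points.
import math

def gaze_point(yaw_deg, pitch_deg, distance=1.0):
    """Project head-pose yaw/pitch onto a plane at `distance` in front
    of the subject; returns an (x, y) gaze point on that plane."""
    x = distance * math.tan(math.radians(yaw_deg))
    y = distance * math.tan(math.radians(pitch_deg))
    return (x, y)

def sustained_spans(gaze_points, radius=0.15, min_len=3):
    """Group consecutive gaze points that stay within `radius` of the
    span's first point; keep spans at least `min_len` frames long.
    Returns (start_frame, end_frame) index pairs."""
    spans, start = [], 0
    for i in range(1, len(gaze_points) + 1):
        # Close the current span when the sequence ends or gaze drifts
        # outside the radius around the span's anchor point.
        if i == len(gaze_points) or math.dist(gaze_points[i], gaze_points[start]) > radius:
            if i - start >= min_len:
                spans.append((start, i - 1))
            start = i
    return spans
```

A sustained attention span graph in the paper's sense could then be drawn by plotting span lengths over time; the radius and minimum length would in practice depend on frame rate and the angular resolution achievable from low-resolution head-pose estimates.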
Index Terms
- Human Attention Span Modeling using 2D Visualization Plots for Gaze Progression and Gaze Sustenance