Abstract
In this paper, we investigate the influence of facial parameters on the subjective impression created when viewing photographs of people, in the context of keyframe extraction from home video. Hypotheses about the influence of the investigated parameters on this impression are experimentally validated with respect to a given viewing perspective. Based on the findings of a user experiment, we propose a novel human-centric image scoring method based on weighted face parameters. As a novelty in the field of keyframe extraction, the proposed method considers facial expressions in addition to other parameters. We evaluate its effectiveness in terms of the correlation between the image score and a ground-truth user impression score. The results show that considering facial expressions in the proposed method improves this correlation compared to image scores that rely on commonly used face parameters such as size and location.
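The exact parameter set and weighting scheme are specified in the body of the paper; the sketch below is a rough illustration only, assuming a simple linear combination of normalized face size, face centrality, and an expression (smile) score, followed by Pearson correlation against ground-truth impression ratings. The names, weights, and the aggregation over faces here are assumptions for illustration, not the authors' actual formulation.

```python
import math
from dataclasses import dataclass

@dataclass
class Face:
    """A detected face; coordinates and sizes are normalized to [0, 1]."""
    width: float       # bounding-box width relative to image width
    height: float      # bounding-box height relative to image height
    center_x: float    # bounding-box center (relative image coordinates)
    center_y: float
    expression: float  # e.g. smile intensity in [0, 1] from an expression classifier

def face_score(face: Face, w_size: float = 0.3, w_loc: float = 0.3, w_expr: float = 0.4) -> float:
    """Illustrative weighted score for one face (weights are placeholders, not the paper's)."""
    size = face.width * face.height                              # relative face area
    dist = math.hypot(face.center_x - 0.5, face.center_y - 0.5)  # distance from image center
    centrality = 1.0 - dist / math.sqrt(0.5)                     # 1 at the center, 0 at a corner
    return w_size * size + w_loc * centrality + w_expr * face.expression

def image_score(faces: list[Face]) -> float:
    """Frame-level score: here simply the mean of per-face scores (aggregation is an assumption)."""
    return sum(face_score(f) for f in faces) / len(faces) if faces else 0.0

def pearson_r(predicted: list[float], ground_truth: list[float]) -> float:
    """Pearson correlation between predicted image scores and user impression ratings."""
    n = len(predicted)
    mx, my = sum(predicted) / n, sum(ground_truth) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(predicted, ground_truth))
    sx = math.sqrt(sum((x - mx) ** 2 for x in predicted))
    sy = math.sqrt(sum((y - my) ** 2 for y in ground_truth))
    return cov / (sx * sy)
```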
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Kowalik, U., Irie, G., Miyazaki, Y., Kojima, A. (2010). Facial Parameters and Their Influence on Subjective Impression in the Context of Keyframe Extraction from Home Video Contents. In: Boll, S., Tian, Q., Zhang, L., Zhang, Z., Chen, Y.-P.P. (eds.) Advances in Multimedia Modeling. MMM 2010. Lecture Notes in Computer Science, vol. 5916. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-11301-7_11
DOI: https://doi.org/10.1007/978-3-642-11301-7_11
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-11300-0
Online ISBN: 978-3-642-11301-7
eBook Packages: Computer Science (R0)