Abstract
This paper presents a system that adds another input modality to a multimodal human-machine interaction scenario. In addition to common input modalities such as speech, we extract head gestures using image interpretation techniques based on machine learning algorithms, providing a nonverbal and familiar way of interacting with the system. Our experimental evaluation proves that the presented approach works in real time and reliably.
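To illustrate the idea of mapping tracked head motion to gesture labels, the following is a minimal, hypothetical sketch. It is not the authors' learned classifier or image-processing pipeline; it only assumes that a face tracker supplies per-frame face-center positions, and it decides between a nod and a shake from the dominant axis of motion:

```python
# Hypothetical sketch (not the paper's actual method): classify a tracked
# face-center trajectory as "nod", "shake", or "none". Assumes an upstream
# face tracker provides (x, y) center positions in pixels, one per frame.

def classify_head_gesture(centers, threshold=5.0):
    """centers: list of (x, y) face-center positions per frame.
    Returns "nod" if vertical motion dominates, "shake" if horizontal
    motion dominates, "none" if total motion is below threshold."""
    if len(centers) < 2:
        return "none"
    # Accumulate absolute frame-to-frame displacement on each axis.
    dx = sum(abs(centers[i + 1][0] - centers[i][0])
             for i in range(len(centers) - 1))
    dy = sum(abs(centers[i + 1][1] - centers[i][1])
             for i in range(len(centers) - 1))
    if max(dx, dy) < threshold:
        return "none"  # too little motion to count as a gesture
    return "nod" if dy > dx else "shake"
```

A real system, like the one presented here, would instead feed image-derived features into trained classifiers and handle noise, timing, and head pose; this sketch only conveys the nod-versus-shake decision in its simplest form.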
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Gast, J. et al. (2009). Did I Get It Right: Head Gestures Analysis for Human-Machine Interactions. In: Jacko, J.A. (eds) Human-Computer Interaction. Novel Interaction Methods and Techniques. HCI 2009. Lecture Notes in Computer Science, vol 5611. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-02577-8_19
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-02576-1
Online ISBN: 978-3-642-02577-8