
Interaction History in Adaptive Multimodal Interaction


Part of the book series: Cognitive Technologies ((COGTECH))

Abstract

Modern Companion-Technologies provide multimodal and adaptive interaction possibilities. However, it is still unclear which user characteristics should be used, and in which manner, to optimally support the interaction. An important aspect is that users themselves learn and adapt their behavior and preferences based on their own experiences. In other words, certain characteristics of user behavior are slowly but continuously changed and updated by the users themselves over multiple encounters with the Companion-Technology. Thus, a biological adaptive multimodal system observes and interacts with an electronic one, and vice versa. Consequently, such a user-centered interaction history is essential and should be integrated into the prediction of user behavior. Doing so enables the Companion to achieve more robust predictions of user behavior, which in turn leads to better fusion decisions and more efficient customization of the UI. We present the development of an experimental paradigm based on visual search tasks. The setup allows the induction of various user experiences as well as the testing of their effects on user behavior and preferences during multimodal interaction.
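The idea of feeding a user-centered interaction history into the prediction of user behavior can be sketched as follows. This is a minimal illustrative model, not the chapter's actual implementation; the modality names and the decay parameter are assumptions chosen for the example. It captures the abstract's point that preferences change slowly over multiple encounters: older observations fade, so the prediction tracks the user's current, gradually shifting preference.

```python
from collections import defaultdict

class InteractionHistory:
    """Per-user record of modality choices across encounters,
    with exponential recency weighting (decay is a free parameter)."""

    def __init__(self, decay=0.9):
        self.decay = decay
        self.weights = defaultdict(float)  # modality -> accumulated weight

    def record(self, modality):
        # Older observations fade, so slowly changing preferences
        # eventually dominate the prediction.
        for m in self.weights:
            self.weights[m] *= self.decay
        self.weights[modality] += 1.0

    def predict(self):
        # Most likely next modality given the recency-weighted history;
        # a fusion engine could use this as a prior for its decisions.
        if not self.weights:
            return None
        return max(self.weights, key=self.weights.get)

history = InteractionHistory(decay=0.8)
for choice in ["speech", "touch", "touch", "gesture", "touch"]:
    history.record(choice)
print(history.predict())  # prints "touch"
```

Such a prior could then be combined with situation-specific evidence during multimodal fusion; how exactly the two are weighted is precisely the kind of question the experimental paradigm presented here is designed to inform.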



Acknowledgements

This work was done within the Transregional Collaborative Research Centre SFB/TRR 62 “Companion-Technology for Cognitive Technical Systems” funded by the German Research Foundation (DFG).

Author information

Correspondence to Felix Schüssel.


Copyright information

© 2017 Springer International Publishing AG

About this chapter


Cite this chapter

Bubalo, N., Schüssel, F., Honold, F., Weber, M., Huckauf, A. (2017). Interaction History in Adaptive Multimodal Interaction. In: Biundo, S., Wendemuth, A. (eds) Companion Technology. Cognitive Technologies. Springer, Cham. https://doi.org/10.1007/978-3-319-43665-4_12

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-43665-4_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-43664-7

  • Online ISBN: 978-3-319-43665-4

  • eBook Packages: Computer Science, Computer Science (R0)
