Gaze and Speech in Attentive User Interfaces

  • Conference paper
Advances in Multimodal Interfaces — ICMI 2000 (ICMI 2000)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1948)


Abstract

The trend toward pervasive computing necessitates finding and implementing appropriate ways for users to interact with devices. We believe the future of interaction with pervasive devices lies in attentive user interfaces: systems that pay attention to what users do so that they can attend to what users need. Such systems track user behavior, model user interests, and anticipate user desires and actions. Beyond developing the technologies that support attentive user interfaces, and the applications or scenarios that use them, there is the problem of evaluating the utility of the attentive approach. With this last point in mind, we observed users in an “office of the future”, where information is accessed on displays via verbal commands. Based on users’ verbal data and eye-gaze patterns, our results suggest that people naturally address individual devices rather than the office as a whole.
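The finding that users address individual devices suggests one simple architecture for an attentive interface: route each spoken command to whichever device the user was most recently looking at. The sketch below is purely illustrative and is not the system described in the paper; the class and method names (`Device`, `AttentionDispatcher`, `on_gaze`, `on_speech`) and the fixation-timeout heuristic are assumptions introduced here for exposition.

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class Device:
    """A display or appliance that can execute spoken commands."""
    name: str

    def handle(self, command: str) -> str:
        return f"{self.name}: executing '{command}'"


class AttentionDispatcher:
    """Routes a spoken command to the device the user last fixated on.

    A fixation is treated as 'addressing' a device only if the speech
    arrives within `timeout` seconds of the gaze event; otherwise the
    command is ignored rather than sent to a stale target.
    """

    def __init__(self, timeout: float = 2.0):
        self.timeout = timeout
        self._last_fixation: Optional[Tuple[Device, float]] = None

    def on_gaze(self, device: Device, t: float) -> None:
        # Record the most recent fixation target and its timestamp.
        self._last_fixation = (device, t)

    def on_speech(self, command: str, t: float) -> Optional[str]:
        if self._last_fixation is None:
            return None
        device, fixated_at = self._last_fixation
        if t - fixated_at > self.timeout:
            return None  # fixation too old to count as addressing
        return device.handle(command)
```

For example, a gaze event on a lamp display at t = 0.0 followed by "turn on" at t = 0.5 would dispatch the command to the lamp, while the same utterance several seconds later would be dropped. The single-target-with-timeout policy is a deliberate simplification; a real system would fuse a stream of fixations with recognized speech.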





Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Maglio, P.P., Matlock, T., Campbell, C.S., Zhai, S., Smith, B.A. (2000). Gaze and Speech in Attentive User Interfaces. In: Tan, T., Shi, Y., Gao, W. (eds) Advances in Multimodal Interfaces — ICMI 2000. ICMI 2000. Lecture Notes in Computer Science, vol 1948. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-40063-X_1

  • DOI: https://doi.org/10.1007/3-540-40063-X_1

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-41180-2

  • Online ISBN: 978-3-540-40063-9

