
Using Visual Cues to Leverage the Use of Speech Input in the Vehicle

  • Conference paper in Persuasive Technology (PERSUASIVE 2018)
  • Part of the book series: Lecture Notes in Computer Science (LNISA, volume 10809)

Abstract

Touch and speech input often exist side by side in multimodal systems. Speech input has a number of advantages over touch that are especially relevant in safety-critical environments such as driving. However, information on large screens tempts drivers to use touch input for interaction; they lack an effective trigger that reminds them that speech input might be the better choice. This work investigates the efficacy of visual cues to leverage the use of speech input while driving. We conducted a driving simulator experiment with 45 participants that examined the influence of visual cues, task type, driving scenario, and audio signals on the driver's choice of modality, glance behavior, and subjective ratings. The results indicate that visual cues can effectively promote speech input without increasing visual distraction or restricting the driver's freedom to choose. We propose that our results can be applied to other contexts such as smartphones or smart home applications.
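To make the cueing idea concrete, the sketch below shows one hypothetical rule for deciding when an interface might display a visual cue (e.g., a microphone icon next to a touch target) suggesting speech input. The task categories, names, and the high-demand heuristic are our own illustrative assumptions, not the study's implementation.

    # Minimal illustrative sketch, not the authors' system. It encodes one
    # plausible policy: cue speech input for tasks where speech tends to
    # outperform touch, and for all tasks when driving demand is high.
    from dataclasses import dataclass

    @dataclass
    class Task:
        kind: str            # e.g., "text_entry", "list_selection", "map_pan"
        driving_demand: str  # e.g., "low" or "high"

    # Hypothetical task kinds for which speech is assumed the better modality.
    SPEECH_SUITED = {"text_entry", "contact_search", "destination_entry"}

    def show_speech_cue(task: Task) -> bool:
        """Return True if the UI should display a visual cue prompting speech."""
        return task.kind in SPEECH_SUITED or task.driving_demand == "high"

    if __name__ == "__main__":
        print(show_speech_cue(Task("text_entry", "low")))   # True
        print(show_speech_cue(Task("map_pan", "low")))      # False
        print(show_speech_cue(Task("map_pan", "high")))     # True

Note that a rule like this only suggests a modality; it does not block touch, which matches the paper's finding that cues can promote speech without restricting the driver's freedom to choose.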


Notes

  1. https://www.faytech.com/de/katalog/product/101-capacitive-touch-monitor-ft10wtmbcap/.
  2. http://de.rode.com/microphones/smartlav.
  3. http://www.ergoneers.com/eye-tracking.


Author information

Correspondence to Florian Roider.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper


Cite this paper

Roider, F., Rümelin, S., Gross, T. (2018). Using Visual Cues to Leverage the Use of Speech Input in the Vehicle. In: Ham, J., Karapanos, E., Morita, P., Burns, C. (eds.) Persuasive Technology. PERSUASIVE 2018. Lecture Notes in Computer Science, vol. 10809. Springer, Cham. https://doi.org/10.1007/978-3-319-78978-1_10

  • DOI: https://doi.org/10.1007/978-3-319-78978-1_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-78977-4

  • Online ISBN: 978-3-319-78978-1

  • eBook Packages: Computer Science, Computer Science (R0)
