
A Cognitive User Interface for a Multi-modal Human-Machine Interaction

  • Conference paper
Speech and Computer (SPECOM 2018)

Abstract

We developed a hardware-based cognitive user interface that gives inexperienced users and people with little affinity for technology easy access to smart home devices. The interface interacts with the user via speech, gestures, or a touchscreen. By learning from the user's behavior, it adapts to each individual. In contrast to most commercial products, our solution keeps all data required for operation on the device itself and connects to other UCUI devices only over an encrypted wireless network. By design, no data ever leave the system for the servers of third-party service providers. In this way, we ensure the user's privacy.
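The abstract's claim that the interface "adapts to each individual by learning from the user's behavior" can be illustrated with a small reinforcement-learning sketch. This is not the authors' implementation: it assumes a standard tabular Q-learning approach, and the state name ("evening"), the action set, and the reward signal (user confirms or overrides a suggestion) are all hypothetical.

```python
import random
from collections import defaultdict

# Illustrative sketch only: tabular Q-learning adapting a smart-home
# suggestion to user feedback. States, actions, and rewards are assumptions.

class UserAdapter:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = defaultdict(float)          # Q[(state, action)] -> learned value
        self.actions = list(actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # epsilon-greedy: mostly exploit the learned preference, sometimes explore
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        self.q[(state, action)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(state, action)]
        )

# Simulated interaction: the user rewards "dim_lights" in the evening (+1)
# and overrides every other suggestion (-1).
adapter = UserAdapter(actions=["lights_on", "lights_off", "dim_lights"])
for _ in range(500):
    action = adapter.choose("evening")
    reward = 1.0 if action == "dim_lights" else -1.0
    adapter.update("evening", action, reward, "evening")
```

The epsilon-greedy policy is what lets such a device keep adapting: it usually acts on the learned preference but occasionally tries other actions, so a change in the user's habits is eventually picked up. All learning state lives in the local Q-table, consistent with the paper's goal of keeping user data on the device.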




Acknowledgments

We thank the German Federal Ministry of Education and Research (BMBF) and the project management agency VDI/VDE Innovation + Technik GmbH for their financial support (#16ES0297), and our partners Javox Solutions GmbH, XGraphic Ingenieurgesellschaft mbH, and Agilion GmbH for their collaboration.

Corresponding author

Correspondence to Constanze Tschöpe.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Tschöpe, C., Duckhorn, F., Huber, M., Meyer, W., Wolff, M. (2018). A Cognitive User Interface for a Multi-modal Human-Machine Interaction. In: Karpov, A., Jokisch, O., Potapova, R. (eds.) Speech and Computer. SPECOM 2018. Lecture Notes in Computer Science, vol. 11096. Springer, Cham. https://doi.org/10.1007/978-3-319-99579-3_72


  • DOI: https://doi.org/10.1007/978-3-319-99579-3_72

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-99578-6

  • Online ISBN: 978-3-319-99579-3

  • eBook Packages: Computer Science (R0)
