Abstract
We developed a hardware-based cognitive user interface that gives inexperienced users with little affinity for technology easy access to smart home devices. The interface interacts with the user via speech, gestures, or a touchscreen, and by learning from the user’s behavior it adapts to each individual. In contrast to most commercial products, our solution keeps all data required for operation internally and is connected to other UCUI devices only via an encrypted wireless network. By design, no data ever leave the system for the servers of third-party service providers. In this way, we ensure the user’s privacy.
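The adaptation to individual users mentioned above could, for instance, be realized with tabular Q-learning. The sketch below is purely illustrative and not taken from the paper: the states, actions, and reward scheme are assumptions invented for the example.

```python
import random

def q_learning_step(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update:
    Q(s,a) += alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q[next_state].values())
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

# Hypothetical smart-home states and actions, for illustration only.
states = ["idle", "user_present"]
actions = ["suggest_light", "stay_quiet"]
q = {s: {a: 0.0 for a in actions} for s in states}

random.seed(0)
for _ in range(500):
    s = random.choice(states)
    a = random.choice(actions)
    # Assumed reward model: the user accepts light suggestions when present.
    r = 1.0 if (s == "user_present" and a == "suggest_light") else 0.0
    q_learning_step(q, s, a, r, random.choice(states))

# After training, the learned preference when the user is present:
best = max(q["user_present"], key=q["user_present"].get)
```

In this toy setting the table converges so that `suggest_light` dominates in the `user_present` state, mirroring how a cognitive interface could gradually prefer actions the individual user rewards.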
Acknowledgments
We thank the German Federal Ministry of Education and Research (BMBF) and the project management agency VDI/VDE Innovation + Technik GmbH for their financial support (#16ES0297), and our partners Javox Solutions GmbH, XGraphic Ingenieurgesellschaft mbH, and Agilion GmbH for their collaboration.
Copyright information
© 2018 Springer Nature Switzerland AG
Cite this paper
Tschöpe, C., Duckhorn, F., Huber, M., Meyer, W., Wolff, M. (2018). A Cognitive User Interface for a Multi-modal Human-Machine Interaction. In: Karpov, A., Jokisch, O., Potapova, R. (eds) Speech and Computer. SPECOM 2018. Lecture Notes in Computer Science(), vol 11096. Springer, Cham. https://doi.org/10.1007/978-3-319-99579-3_72
Print ISBN: 978-3-319-99578-6
Online ISBN: 978-3-319-99579-3