Multimodal Fusion and Fission within W3C Standards for Nonverbal Communication with Blind Persons

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 8547)

Abstract

Multimodal fusion and multimodal fission are well-known concepts in multimodal systems, but they have not been well integrated into current architectures to support collaboration between blind and sighted people. In this paper we describe our initial thoughts on multimodal dialog modeling for multiuser dialog settings employing multiple modalities, based on W3C standards such as the Multimodal Architecture and Interfaces.
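In the W3C Multimodal Architecture and Interfaces, an interaction manager coordinates modality components (e.g., speech, braille, or tactile output) by exchanging XML lifecycle events such as StartRequest. A minimal sketch of serializing such an event is shown below; the element and attribute names (`mmi`, `StartRequest`, `Context`, `RequestID`, `Source`, `Target`) and the namespace follow the MMI Architecture specification, while the endpoint URIs are hypothetical placeholders.

```python
# Sketch: serializing a W3C MMI lifecycle event (StartRequest) that an
# interaction manager would send to a modality component.
import xml.etree.ElementTree as ET

MMI_NS = "http://www.w3.org/2008/04/mmi-arch"
ET.register_namespace("mmi", MMI_NS)

def start_request(context, request_id, source, target):
    """Build an mmi:StartRequest event and return it as an XML string."""
    root = ET.Element(f"{{{MMI_NS}}}mmi", {"version": "1.0"})
    ET.SubElement(root, f"{{{MMI_NS}}}StartRequest", {
        "Context": context,      # identifies the ongoing interaction context
        "RequestID": request_id, # correlates the later StartResponse
        "Source": source,        # sender (interaction manager) address
        "Target": target,        # receiver (modality component) address
    })
    return ET.tostring(root, encoding="unicode")

# Hypothetical endpoints for an interaction manager and a braille component.
msg = start_request("ctx-1", "req-1",
                    "http://example.org/im",
                    "http://example.org/braille-mc")
print(msg)
```

In a multiuser setting as discussed in the paper, the interaction manager would address one such event per modality component, so that fission (distributing output across modalities) becomes a matter of routing lifecycle events to the right targets.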




Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Schnelle-Walka, D., Radomski, S., Mühlhäuser, M. (2014). Multimodal Fusion and Fission within W3C Standards for Nonverbal Communication with Blind Persons. In: Miesenberger, K., Fels, D., Archambault, D., Peňáz, P., Zagler, W. (eds) Computers Helping People with Special Needs. ICCHP 2014. Lecture Notes in Computer Science, vol 8547. Springer, Cham. https://doi.org/10.1007/978-3-319-08596-8_33

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-08596-8_33

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-08595-1

  • Online ISBN: 978-3-319-08596-8

  • eBook Packages: Computer Science, Computer Science (R0)
