DOI: 10.1145/3136755.3136768

Modelling fusion of modalities in multimodal interactive systems with MMMM

Published: 3 November 2017

ABSTRACT

Several models and design spaces have been defined and are regularly used to describe how modalities can be fused in an interactive multimodal system. However, models such as CASE, the CARE properties and TYCOON were all defined more than two decades ago. In this paper, we start with a critical review of these models, which notably highlights a confusion between how the user side and the system side of a multimodal system are described. Based on this critical review, we define MMMM v1, an improved model for describing multimodal fusion in interactive systems that targets completeness. A first user evaluation comparing the models revealed that MMMM v1 was indeed complete, but at the cost of user-friendliness. Based on the results of this first evaluation, an improved version of MMMM, called MMMM v2, was defined. A second user evaluation highlighted that this model achieves a good balance between complexity, consistency and completeness compared to the state of the art.

References

  1. Richard A. Bolt. 1980. “Put-that-there”: Voice and Gesture at the Graphics Interface. In Proceedings of the 7th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 1980). Seattle, USA, 262–270.
  2. Andrea Cherubini, Robin Passama, Philippe Fraisse, and André Crosnier. 2015. A Unified Multimodal Control Framework for Human-Robot Interaction. Robotics and Autonomous Systems 70 (2015), 106–115.
  3. Joëlle Coutaz, Laurence Nigay, Daniel Salber, Ann Blandford, Jon May, and Richard M. Young. 1995. Four Easy Pieces for Assessing the Usability of Multimodal Interaction: The CARE Properties. In Proceedings of the 5th International Conference on Human-Computer Interaction (Interact 1995). Lillehammer, Norway, 115–120.
  4. Fredy Cuenca, Jan Van den Bergh, Kris Luyten, and Karin Coninx. 2015. Hasselt UIMS: A Tool for Describing Multimodal Interactions with Composite Events. In Proceedings of the 7th ACM SIGCHI Symposium on Engineering Interactive Computing Systems. Duisburg, Germany, 226–229.
  5. Bruno Dumas, Rolf Ingold, and Denis Lalanne. 2009. Benchmarking Fusion Engines of Multimodal Interactive Systems. In Proceedings of the 2009 International Conference on Multimodal Interfaces. Cambridge, Massachusetts, USA, 169–176.
  6. Bruno Dumas, Denis Lalanne, and Rolf Ingold. 2010. Description Languages for Multimodal Interaction: A Set of Guidelines and its Illustration with SMUIML. Journal on Multimodal User Interfaces 3, 3 (February 2010), 237–247.
  7. Bruno Dumas, Denis Lalanne, and Sharon Oviatt. 2009. Multimodal Interfaces: A Survey of Principles, Models and Frameworks. In Human Machine Interaction: Research Results of the MMI Program. Springer-Verlag, Berlin, Heidelberg, 3–26.
  8. Lode Hoste, Bruno Dumas, and Beat Signer. 2011. Mudra: A Unified Multimodal Interaction Framework. In Proceedings of the 13th International Conference on Multimodal Interfaces (ICMI 2011). Alicante, Spain, 97–104.
  9. Marc Erich Latoschik. 2005. A User Interface Framework for Multimodal VR Interactions. In Proceedings of the 7th International Conference on Multimodal Interfaces. ACM, Trento, Italy, 76–83.
  10. Jean-Claude Martin. 1998. TYCOON: Theoretical Framework and Software Tools for Multimodal Interfaces. Intelligence and Multimodality in Multimedia Interfaces (1998), 1–25.
  11. Jean-Claude Martin and Dominique Béroule. 1999. TYCOON: Six Primitive Types of Cooperation for Observing, Evaluating and Specifying Cooperations. In Proceedings of the AAAI Fall 1999 Symposium on Psychological Models of Communication in Collaborative Systems, Vol. 16.
  12. David R. McGee, Philip R. Cohen, and Lizhong Wu. 2000. Something from Nothing: Augmenting a Paper-based Work Practice via Multimodal Interaction. In Proceedings of DARE 2000 on Designing Augmented Reality Environments. Elsinore, Denmark, 71–80.
  13. Laurence Nigay. 1994. Conception et modélisation logicielles des systèmes interactifs : application aux interfaces multimodales [Software design and modelling of interactive systems: application to multimodal interfaces]. Ph.D. Dissertation. Université Joseph Fourier – Grenoble I.
  14. Laurence Nigay. 2004. Design Space for Multimodal Interaction. In Building the Information Society. Springer, 403–408.
  15. Laurence Nigay and Joëlle Coutaz. 1993. A Design Space for Multimodal Systems: Concurrent Processing and Data Fusion. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 1993). 172–178.
  16. Donald A. Norman. 2013. The Design of Everyday Things: Revised and Expanded Edition. Basic Books.
  17. Sharon Oviatt. 1999. Ten Myths of Multimodal Interaction. Commun. ACM 42, 11 (1999), 74–81.
  18. M. Serrano, L. Nigay, J.-Y. Lawson, A. Ramsay, R. Murray-Smith, and S. Denef. 2008. The OpenInterface Framework: A Tool for Multimodal Interaction. In Proceedings of the 26th International Conference on Human Factors in Computing Systems (CHI 2008). Florence, Italy, 3501–3506.

Published in

ICMI '17: Proceedings of the 19th ACM International Conference on Multimodal Interaction
November 2017, 676 pages
ISBN: 9781450355438
DOI: 10.1145/3136755
Copyright © 2017 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

ICMI '17 paper acceptance rate: 65 of 149 submissions (44%). Overall acceptance rate: 453 of 1,080 submissions (42%).
