DOI: 10.1145/3382507.3418850

Finally on Par?! Multimodal and Unimodal Interaction for Open Creative Design Tasks in Virtual Reality

Published: 22 October 2020

Abstract

Multimodal Interfaces (MMIs) have long been considered a promising interaction paradigm for Virtual Reality (VR). However, they are still far less common than unimodal interfaces (UMIs). This paper presents a summative user study comparing an MMI to a typical UMI for a design task in VR. We developed an application targeting creative 3D object manipulations, i.e., creating 3D objects and modifying typical object properties such as color or size. The associated open user task is based on the Torrance Tests of Creative Thinking. We compared a synergistic multimodal interface using speech-accompanied pointing/grabbing gestures with a more typical unimodal interface using a hierarchical radial menu to trigger actions on selected objects. Independent judges rated the creativity of the resulting products using the Consensual Assessment Technique. Additionally, we measured the creativity-promoting factors flow, usability, and presence. Our results show that the MMI performs on par with the UMI in all measurements despite its limited flexibility and reliability. These promising results demonstrate the technological maturity of MMIs and their potential to efficiently extend traditional interaction techniques in VR.
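The synergistic speech-and-gesture interaction described above, in the classic "put-that-there" spirit, can be illustrated with a minimal fusion sketch. The event types, the one-second fusion window, and the deictic resolver below are illustrative assumptions for exposition, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class SpeechEvent:
    command: str      # recognized utterance, e.g. "paint that yellow"
    timestamp: float  # seconds

@dataclass
class PointingEvent:
    target_id: str    # id of the object currently pointed at
    timestamp: float  # seconds

def fuse(speech: SpeechEvent, pointing: PointingEvent, window: float = 1.0):
    """Resolve a deictic reference ('that') to the pointed-at object,
    but only if both events fall within a small temporal window."""
    if abs(speech.timestamp - pointing.timestamp) > window:
        return None  # too far apart to count as one multimodal command
    return {"action": speech.command.replace("that", pointing.target_id),
            "target": pointing.target_id}

print(fuse(SpeechEvent("paint that yellow", 2.1),
           PointingEvent("cube_7", 2.3)))
# -> {'action': 'paint cube_7 yellow', 'target': 'cube_7'}
```

A real synergistic interface would additionally handle recognition uncertainty and mutual disambiguation between the modalities; the time-window check here only sketches the basic fusion idea.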

Supplementary Material

MP4 File (3382507.3418850.mp4)
This paper presents a user study comparing a multimodal interface (MMI) to a typical unimodal interface (UMI) for a design task in VR. We developed an application targeting creative 3D object manipulations, i.e., creating 3D objects and modifying typical object properties such as color or size. We compared a synergistic multimodal interface using speech-accompanied pointing/grabbing gestures with a more typical unimodal interface using a hierarchical radial menu to trigger actions on selected objects. Independent judges rated the creativity of the resulting products using the Consensual Assessment Technique. Our results show that the MMI performs on par with the UMI in all measurements despite its limited flexibility and reliability. These promising results demonstrate the technological maturity of MMIs and their potential to efficiently extend traditional interaction techniques in VR.
MP4 File (icmi1169.mp4)
Supplemental video
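The Consensual Assessment Technique mentioned above has independent judges rate each product, and the ratings are only meaningful if the judges agree; a standard check is inter-rater consistency such as Cronbach's alpha computed over the judges. The sketch below uses made-up scores for illustration, not data from the study.

```python
def cronbach_alpha(ratings):
    """Inter-rater consistency for consensual-assessment ratings.

    ratings: one list per judge, each holding that judge's score for
    every rated product (all lists the same length).
    """
    k = len(ratings)                      # number of judges
    def var(xs):                          # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    judge_vars = sum(var(j) for j in ratings)
    totals = [sum(scores) for scores in zip(*ratings)]  # per-product sums
    return k / (k - 1) * (1 - judge_vars / var(totals))

# Three hypothetical judges rating four products on a 1-5 scale.
judges = [[4, 5, 3, 4],
          [4, 4, 3, 5],
          [5, 5, 2, 4]]
print(round(cronbach_alpha(judges), 3))  # -> 0.818
```

Values above roughly 0.7-0.8 are conventionally read as acceptable agreement, which is why CAT studies routinely report such a coefficient alongside the creativity scores.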


Cited By

  • (2024) An Artists' Perspectives on Natural Interactions for Virtual Reality 3D Sketching. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-20. 10.1145/3613904.3642758. Online publication date: 11-May-2024.
  • (2024) ReactGenie: A Development Framework for Complex Multimodal Interactions Using Large Language Models. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-23. 10.1145/3613904.3642517. Online publication date: 11-May-2024.
  • (2023) Design-thinking skill enhancement in virtual reality: A literature study. Frontiers in Virtual Reality, 4. 10.3389/frvir.2023.1137293. Online publication date: 4-Apr-2023.
  • (2023) A Human-Computer Collaborative Editing Tool for Conceptual Diagrams. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1-29. 10.1145/3544548.3580676. Online publication date: 19-Apr-2023.
  • (2023) Real-time multimodal interaction in virtual reality - a case study with a large virtual interface. Multimedia Tools and Applications 82, 16, 25427-25448. 10.1007/s11042-023-14381-6. Online publication date: 2-Feb-2023.
  • (2022) Reducing the Cognitive Load of Playing a Digital Tabletop Game with a Multimodal Interface. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 1-13. 10.1145/3491102.3502062. Online publication date: 29-Apr-2022.
  • (2022) A Case Study on the Rapid Development of Natural and Synergistic Multimodal Interfaces for XR Use-Cases. CHI Conference on Human Factors in Computing Systems Extended Abstracts, 1-8. 10.1145/3491101.3503552. Online publication date: 27-Apr-2022.

      Published In

      ICMI '20: Proceedings of the 2020 International Conference on Multimodal Interaction
      October 2020
      920 pages
      ISBN:9781450375818
      DOI:10.1145/3382507
      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Author Tags

      1. 3d user interfaces
      2. creativity
      3. design
      4. multimodal interaction
      5. speech and gesture
      6. user study
      7. virtual reality

      Qualifiers

      • Research-article

      Conference

ICMI '20: International Conference on Multimodal Interaction
October 25-29, 2020
Virtual Event, Netherlands

      Acceptance Rates

      Overall Acceptance Rate 453 of 1,080 submissions, 42%

