DOI: 10.1145/3328756.3328758
Short paper

Design of Seamless Multi-modal Interaction Framework for Intelligent Virtual Agents in Wearable Mixed Reality Environment

Published: 01 July 2019

Abstract

In this paper, we present the design of a multimodal interaction framework for intelligent virtual agents in wearable mixed reality environments, aimed especially at interactive applications in museums, botanical gardens, and similar venues. Such venues need engaging, non-repetitive digital content delivery to maximize user involvement, and an intelligent virtual agent is a promising medium for both purposes. The framework is premised on wearable mixed reality as provided by MR devices that support spatial mapping. We envisioned a seamless interaction framework that integrates spatial mapping, virtual character animation, speech recognition, gaze, a domain-specific chatbot, and object recognition to enhance virtual experiences and the communication between users and virtual agents. By applying a modular approach and deploying computationally intensive modules on a cloud platform, we achieved a seamless virtual experience on a device with limited resources. Human-like gaze and speech interaction with a virtual agent made it more interactive, and automated mapping of body animations to the content of speech made it more engaging. In our tests, the virtual agents responded within 2-4 seconds of a user query. The strength of the framework lies in its flexibility and adaptability: it can be adapted to any wearable MR device that supports spatial mapping.
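The modular pipeline the abstract describes (user utterance routed to a cloud-hosted, domain-specific chatbot module, with the reply automatically mapped to a body animation before rendering) might be sketched as follows. This is a minimal illustration only: the paper does not publish its implementation, and every name here (AgentResponse, ANIMATION_MAP, answer_query, the toy knowledge base) is hypothetical.

```python
# Hypothetical sketch of the paper's modular agent pipeline: an utterance is
# resolved against a domain knowledge base (standing in for the cloud-hosted
# chatbot module), and the reply text is mapped to an animation clip
# (standing in for the automated speech-to-gesture mapping).
from dataclasses import dataclass


@dataclass
class AgentResponse:
    text: str       # spoken reply for text-to-speech
    animation: str  # body-animation clip chosen to match the speech content


# Illustrative keyword -> animation mapping; the real system's mapping is
# not specified in the abstract.
ANIMATION_MAP = {
    "hello": "wave",
    "over there": "point",
    "thank": "bow",
}


def answer_query(utterance: str, knowledge: dict) -> AgentResponse:
    """Resolve a query against a toy knowledge base and pick a gesture."""
    reply = knowledge.get(utterance.lower(), "I'm not sure about that.")
    # First keyword found in the reply selects the gesture; otherwise idle.
    animation = next(
        (clip for key, clip in ANIMATION_MAP.items() if key in reply.lower()),
        "idle",
    )
    return AgentResponse(text=reply, animation=animation)


# Toy museum-domain knowledge base.
kb = {"what is this flower?": "Hello! This is an orchid, over there you can see more."}
resp = answer_query("What is this flower?", kb)
```

In the paper's setting, `answer_query` would run on the cloud side, with the MR device sending recognized speech and playing back the returned text and animation.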



    Published In

    CASA '19: Proceedings of the 32nd International Conference on Computer Animation and Social Agents
    July 2019
    95 pages
    ISBN:9781450371599
    DOI:10.1145/3328756
    © 2019 Association for Computing Machinery. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of a national government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Qualifiers

    • Short-paper
    • Research
    • Refereed limited

    Conference

    CASA '19

    Acceptance Rates

    Overall Acceptance Rate 18 of 110 submissions, 16%

    Article Metrics

    • Downloads (Last 12 months)102
    • Downloads (Last 6 weeks)22
    Reflects downloads up to 03 Mar 2025

    Cited By

    • (2024) Research on the Application of Extended Reality in the Construction and Management of Landscape Engineering. Electronics 13(5), 897. DOI: 10.3390/electronics13050897. Online publication date: 26-Feb-2024.
    • (2024) EMiRAs-Empathic Mixed Reality Agents. Proceedings of the 3rd Empathy-Centric Design Workshop: Scrutinizing Empathy Beyond the Individual, 1-7. DOI: 10.1145/3661790.3661791. Online publication date: 11-May-2024.
    • (2024) Designing EEPO: An Emissary Educator Playmate Oracle XR Conversation Agent for Children. 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), 151-156. DOI: 10.1109/VRW62533.2024.00031. Online publication date: 16-Mar-2024.
    • (2024) Voice-Augmented Virtual Reality Interface for Serious Games. 2024 IEEE Conference on Games (CoG), 1-8. DOI: 10.1109/CoG60054.2024.10645616. Online publication date: 5-Aug-2024.
    • (2023) Chatbots in Museums: Is Visitor Experience Measured? Czech Journal of Tourism 11(1-2), 14-31. DOI: 10.2478/cjot-2022-0002. Online publication date: 5-Jan-2023.
    • (2023) Framing Seamlessness-Enhancing Future Multimodal Interaction from Physical to Virtual Spaces. Companion Proceedings of the 2023 Conference on Interactive Surfaces and Spaces, 79-81. DOI: 10.1145/3626485.3626549. Online publication date: 5-Nov-2023.
    • (2023) Tell Me Where To Go: Voice-Controlled Hands-Free Locomotion for Virtual Reality Systems. 2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR), 123-134. DOI: 10.1109/VR55154.2023.00028. Online publication date: Mar-2023.
    • (2023) AR Intelligent Real-time Method for Cultural Heritage Object Recognition. 2023 IEEE 5th International Conference on Advanced Information and Communication Technologies (AICT), 62-66. DOI: 10.1109/AICT61584.2023.10452426. Online publication date: 21-Nov-2023.
    • (2023) Design and user experience analysis of AR intelligent virtual agents on smartphones. Cognitive Systems Research 78, 33-47. DOI: 10.1016/j.cogsys.2022.11.007. Online publication date: Mar-2023.
    • (2023) Multi-modal interaction using time division long-term evolution (TD-LTE) for space designing exhibition. Wireless Networks 29(8), 3625-3636. DOI: 10.1007/s11276-023-03427-0. Online publication date: 27-Jun-2023.
