Publicly available. Published by Oldenbourg Wissenschaftsverlag, April 1, 2022

QuarantivityVR: Supporting Self-Embodiment for Non-HMD Users in Asymmetric Social VR Games

  • Amal Yassien

    Amal Yassien is a Ph.D. student and an assistant lecturer at the German International University in Cairo (GIU). She obtained her M.Sc. (September 2021) and B.Sc. (July 2017) degrees in Computer Science and Engineering from the German University in Cairo (GUC). Her master's thesis focused on enhancing social VR experiences by augmenting nonverbal cues. Her current research interest lies in exploring the future of social augmented reality platforms and enhancing users' social experience within these platforms.

    , Mohamed Ahmed Soliman

    Mohamed Ahmed Soliman is a B.Sc. holder from the Faculty of Media Engineering and Technology at the German University in Cairo (GUC). Mohamed is currently pursuing his pre-master's studies in Computer Science and Engineering at the GUC Berlin Campus. The work presented in this manuscript is his bachelor thesis project.

    and Slim Abdennadher

    Slim Abdennadher is the President of the German International University and a Full Professor in Computer Science at the Faculty of Informatics and Computer Science, German International University (GIU) in the New Administrative Capital, Egypt. Prof. Dr. Abdennadher earned his doctoral degree in 1998 and his habilitation in 2001 from the Ludwig-Maximilians-University Munich, Germany. He completed his Bachelor's and Master's degrees at the University of Kaiserslautern, Germany. Between 2011 and 2021, Prof. Dr. Abdennadher held the position of Vice President for Academic and Research Affairs at the German University, in addition to being the Acting Dean of the Faculty of Media Engineering and Technology and the study dean for all engineering faculties from 2004. During his appointment, he was also responsible for many duties, such as heading the examination committee, international accreditation, and the research committee. His research focuses on constraint programming, human-computer interaction, serious games, and affective computing, in which areas he has authored three books and more than 120 peer-reviewed publications. He has participated in numerous prestigious international conferences and academic workshops, whether as general chair, as part of the organizing committee, or as a member of the academic program committee. He has also brought his research groups many research funds through projects that tackle different areas of research.

From the journal i-com

Abstract

The prevalence of immersive head-mounted display (HMD) social virtual reality (VR) applications has introduced asymmetric interaction among users within the virtual environment (VE). So far, researchers have opted for (1) exploring asymmetric social VR interaction dynamics only in co-located setups, (2) assigning interdependent roles to both HMD and non-HMD users, and (3) representing non-HMD users as abstract avatars in the VE. We therefore investigate the feasibility of supporting self-embodiment in an asymmetric VR interaction mode in a remote setup. To this end, we designed an asymmetric social VR game, QuarantivityVR, to (1) support a sense of self-embodiment for non-HMD users in a remote setting by representing them as realistic full-body avatars within the VE, and (2) augment visual-motor synchrony for the non-HMD users to increase their sense of agency and presence by detecting their motion through a Kinect sensor or the laptop's webcam. During the game, each player performs three activities in succession, namely movie-guessing, spelling-bee, and answering mathematical questions. We believe that our work acts as a step towards the inclusion of a wide spectrum of users who cannot afford full immersion and will aid researchers in creating enjoyable interactions for users in both the physical and virtual spaces.

1 Introduction

Asymmetric VR interactions became prevalent in 2012, after the emergence of head-mounted display (HMD) based virtual reality (VR) applications [21]. They are often utilized when there is a large group of physically co-located bystanders and only one accessible HMD (e.g. a living room environment, or remote visits to concert halls and gyms) [7], [21]. The emergence of this mode of interaction renewed interest in various well-explored design parameters used in social VR development, namely action parameters associated with gestures and synchrony, in order to include non-HMD users in the virtual environment (see literature review [34]). Consequently, related work investigated and developed various interfaces to include the non-HMD user. For instance, Gugenheimer et al. developed the asymmetric prototypes "ShareVR" and "FaceDisplay" [7], [8], which supported asymmetric social interactions within a living room and offered new input systems for the non-HMD user to engage in the VE (e.g. a display-mounted controller and touch-screen-bounded HMDs). However, the representation of the users was abstract and their mobility within the VE was limited. Moreover, the roles assigned to the non-HMD user within the virtual environment often require the granularity of mouse/keyboard or hand-held controllers [21]. The current research direction, however, is moving toward the use of natural hand gestures in social interactions and gaming within social VR platforms [3], [4]. Therefore, providing visual-motor synchrony for non-HMD users by transferring their movement in the real world to the virtual one is an ongoing research endeavor in the virtual reality literature [34].

To bridge this gap, we designed an asymmetric social VR game, QuarantivityVR, to establish a sense of self-embodiment for non-HMD users by (1) representing the non-HMD user via a full realistic avatar and (2) supporting full-body tracking via a Kinect sensor or the laptop's webcam. QuarantivityVR is adapted from an Egyptian board game, "Quarantivity", that emerged during the recent COVID-19 pandemic. It supports three main activities, namely (1) movie-guessing using only body gestures, (2) solving simple mathematical riddles, and (3) spelling words (Spelling-Bee). Due to the pandemic, we designed the game to support both remote and co-located settings to give users the ability to play in a safe environment. We targeted activities that rely on memory, cognition, and physical gestures in order to test the full effect of the user representation along with the sense of agency provided by tracking users' movement in the real world. Our results show that users feel more present and involved when using VR than when using the webcam or Kinect. Moreover, users playing the game using the Kinect experienced higher social presence (engagement) than those playing using VR. This implies that high social presence can be achieved via minimal levels of presence. We believe that our findings will aid in integrating asymmetric VR into users' daily activities, where they can visit cultural places (e.g. concert halls, heritage sites) and participate in a plethora of activities (e.g. gaming), as we offer both VR and non-VR users high engagement and social presence within the VE. We envision that our work will aid in creating enjoyable social virtual experiences that include users who cannot afford full immersion due to health or economic reasons. We believe that our work is a step towards enhancing the non-HMD user's presence and social presence within virtual environments.

Figure 1

Figure A shows an office scene where the users perform the spelling-bee and answer the mathematical questions. Figure B shows the garden where the users are initially instantiated to start the game and play the movie-guessing activity. Figure C shows two users (HMD and non-HMD) performing the movie-guessing activity.

2 Background and Related Work

In this section, we reflect on (1) the social psychology literature to shed light on presence and social presence theory and (2) the virtual reality (VR) and human-computer interaction (HCI) literature to investigate prior work's endeavors in designing asymmetric VR interactions, in terms of the type of user representation used and the type of synchrony or input systems supported.

2.1 Psychological Background

2.1.1 Self-Embodiment in Social VEs

The sense of self-embodiment is associated with the user's subjective feeling of "owning" their avatar [22], [34]. Mennecke et al. [18] argue that embodiment is better established through applying the activity theory framework. Activity theory sees existence as engagement in goal- and socially-driven activities that are conveyed via contexts, tools, and symbols. On a similar note, Pan et al. [22] listed the common technical design parameters needed in a virtual environment to support self-embodiment. Designers opt for establishing visual-proprioception synchrony and visual-motor synchrony to provide users the means to interact with the virtual environment in the same manner they do in the real world [34]. The visual-proprioception (rendering body parts where they are expected) and visual-motor (mirroring the user's movement in the real world to the virtual world) synchronies can be supported via the tracking devices present with the HMD interaction system. Moreover, some designers use depth cameras, such as Kinect and Leap Motion, to track body and hand movement (e.g. [1], [35]). On top of establishing synchrony, avatar design is an important aspect of establishing a sense of self-embodiment. Latoschik et al. [14] generated realistic avatars by relying on image footage from 40 DSLR cameras of the faces of the users embodying them. Their solution enhanced the participants' sense of realism, as it enhanced the overall human-likeness ratings. Pan et al. [22] and Schwind et al. [27] argue that establishing a sense of self-embodiment enhances the user's presence within the environment.

2.1.2 Presence and Social Presence in VR

The senses of Presence and Social Presence are key quality metrics of the user's social virtual experience. Schuemie et al. [26] identified Presence as the subjective feeling of existing in the virtual environment, and Social Presence as the subjective feeling of others' existence in the virtual environment. Parsons et al. [23] modelled social presence as a three-layered chain, with Others' Presence being the minimal level of social presence (recognition of interaction possibilities), Interactive Presence being the middle layer (recognition of others' interactions directed towards the self), and Shared Presence being the highest level of social presence, which recognizes the holistic goals of others. Yassien et al. [34] theorize that establishing Presence is a prerequisite to establishing Social Presence. Moreover, feelings of presence and social presence increase users' perceived enjoyment [19]. Therefore, designers and researchers opt for designing experiences that increase the sense of presence and social presence within the virtual environment.

2.1.3 Presence and Social Presence in Hybrid Spaces

The traditional perspective on establishing presence depicts presence as a perceptual, internal, or conceptual experience [32], thus separating the body from the mind. On the other hand, Gibson's theory of affordances [5] implies that regardless of the environment the user is present in, the origin of their perception is rooted in their actions and purposes. Following the same line, Turner modeled presence as a function of intentionality [31], where intentionality has four forms, listed below with examples drawn from Wagner et al. [32]:

Corporeal Intentionality

(e. g. one’s body moves away from something)

Social Intentionality

(e. g. understanding one’s mental state and those of others)

Affective Intentionality

(e. g. experiencing emotions like fear or boredom)

Cognitive or Perceptual Intentionality

(e. g. brain-body link)

Turner's model is in line with the perspective of Mantovani and Riva [17], where action and the need for action are socially constructed. Waterworth and Hoshi [33] define user experience in hybrid spaces in terms of Embodiment (how users make sense of the environment) and Presence (how they know what to make sense of). They define Presence as (1) an attention-based state in which users can act and carry out intentions and (2) the feeling of being located in a perceptual external world in which users can identify both the self and the non-self. They argue that presence is mainly achieved via proprioception and adaptive movement relative to the world around the user. This is similar to the methods Pan et al. [22] use to establish self-embodiment in VEs.

Social Presence is defined as "being there" in a communication medium where multiple parties exchange information [12]. Similarly, Mennecke et al. [18] define social presence through a framework that centers on virtual representations as a link to social interaction within VEs. Johnson et al. [12] state that social presence is measured via three subdimensions: (1) Access (the feeling of being accessible and having access to the other), (2) Shared Environment (the feeling of being within the same space), and (3) Proximity (the feeling of being close to someone). This is in accord with the three sub-dimensions presented by Yassien et al. [34] to establish Others' Presence, the first and most basic layer of social presence: (1) Symmetry (means of access to the VE), (2) Location (users' location in the real world), and (3) Personal Space. Following the same line, IJsselsteijn and Riva [24] modeled co-presence (i.e. the awareness of others being with you in an environment [23], also known as Others' Presence) as the intersection between physical presence (i.e. existing in an environment [22]) and social presence (i.e. the degree of person-to-person awareness [2]).

This shift in perspective on presence and social presence design is in accord with our hybrid model for supporting social presence in VR [34]. Our model depicts presence as a prerequisite of social presence. We based our model on the views of Schwind [27] and Slater [29] on presence and immersion, which define presence in terms of corporeal and perceptual intentionality only. Our definition of social presence is based on the model produced by Parsons et al. [23], where social presence is a function of intent. Therefore, our hybrid model for presence and social presence in VR can be applied to establish presence and social presence in hybrid spaces. However, centering presence around immersion is not relevant for hybrid space design, as it rests on the following assumptions, drawn from Wagner et al. [32]:

  1. being aware of the mediating technology is undesirable,

  2. the experience is uniform and continuous (according to Waterworth and Hoshi [33], our presence in a hybrid space is split between the physical and the virtual world), and

  3. presence is about replacing reality, not augmenting it.

2.2 Sense of Agency Within Asymmetric VR Interfaces

Recently, asymmetric interfaces have emerged to include non-immersed users who cannot afford full immersion due to health and/or economic reasons. They are designed primarily for entertainment and living room environments, where only one HMD is accessible [7]. Gugenheimer et al. [7], [8] developed two asymmetric prototypes, "ShareVR" and "FaceDisplay", that aimed at increasing the non-HMD users' presence and including them in the VE. "ShareVR" provides the ability to track the position of the non-HMD player using a 7-inch-display-mounted Vive controller, where the non-HMD user can view the virtual world through the display. In their other prototype, "FaceDisplay", they enabled the non-HMD user to interact with the HMD user in the physical space while immersed in the VE, by attaching touch screens to the HMD. Their prototypes increased presence and social interactions compared to basic systems that rely on a TV and gamepads [7]. Moreover, they conclude that the unbalanced power distribution between the HMD user and the non-HMD user in a co-located setup gave non-HMD users an edge in competitive activities [8]. Following the same pattern, Jansen et al. [10] developed an asymmetric AR prototype to include non-HMD users within the VE to increase the overall social interactions. They tracked the position of the non-HMD users using marker-tracking and provided a touch- and gesture-based input system. Moreover, Grandi et al. [6] investigated the effectiveness of an asymmetric VR-AR setup by conducting a user study that compared user performance in collaborative tabletop interactions in symmetric VR-VR/AR-AR setups with an asymmetric one. Their results show that users' performance was significantly better in the asymmetric setup due to the interdependence adopted between the HMD and non-HMD users. Karaosmanoglu et al. [13] designed a visual-audio-cue-augmented collaborative asymmetric VR game in a co-located setup. The non-HMD user shared the view of the HMD user through a PC monitor. Users were instructed to navigate a virtual grid containing traps and lasers whose frequency increases with the heart rate and excitement levels of the non-HMD user. They conclude that both players need time to adjust to the asymmetric setting, as coordinating and communicating was a challenge. Moreover, participants' sense of agency and dominance in the VE is affected by the interdependence between HMD and non-HMD users. Following the same line, Jeong et al. [11] developed an asymmetric interface for remote setups. In their work, non-HMD users use Oculus controllers to interact with the VE, and the main task is to communicate some information to the HMD user. They offered no embodied avatars to the non-HMD users; however, the perspective of the non-HMD user was an independent variable with three levels (first-person only, third-person only, and first-and-third person). The HMD users' role was associated with grabbing objects within the VE, while the non-HMD user participated or assisted them. They conducted a user study to evaluate the quality of their interface and concluded that: (1) HMD users felt a higher sense of presence in the VE than non-HMD users did (using first person only or third person only), (2) non-HMD users felt more competent than the HMD users did, and (3) HMD users experienced more social interactions than non-HMD users did (using first person only or third person only). Zhang et al. [36] developed a "walking simulator" to support a higher sense of agency for both the HMD and non-HMD users. The walking simulator is composed of sensors connected to a Bluetooth module via an Arduino to track the position of the feet. Both users embodied full cartoon avatars with full-body tracking to perform a racing activity (in pairs or groups of 3, competitively) and to navigate through a virtual maze (individually) to evaluate the effectiveness of their simulator in a co-located setup. They conclude that increasing the number of non-HMD users in the group increased the experience of positive feelings in the virtual environment. Moreover, the HMD user experienced the same levels of presence as the non-HMD one.

2.3 Research Gap

Prior work included non-HMD users within the virtual environment by (1) showing them a "window" to the virtual world [7], (2) providing input systems for the non-HMD user in the form of VR controllers [7], [11], (3) providing position tracking [10], and (4) supporting a full sense of agency using full-body tracking and natural locomotion [36]. In this work, we aim to provide a full sense of self-embodiment to the non-HMD user within the virtual environment by (1) providing full realistic avatars as user representations, (2) supporting both natural locomotion and full-body tracking using commodity hardware that is available to all user groups, and (3) providing a conventional controller-based input system using wireless commercial gaming controllers.

2.4 Design Considerations

Based on the output of our literature review and lessons learnt from related work, we present below a set of design considerations that we relied upon to meet our aim presented in Section 2.3.

DC1: Provide self-embodiment for non-HMD user

Related work showed that full-body avatars establish a higher sense of social presence in both fully virtual [30] and hybrid spaces [20]. Therefore, we represent both the HMD and non-HMD user with a full-body avatar. Since the avatars are full-bodied and the presence of the non-HMD user is rooted in their actions [5], [33], we support both visual-proprioception (rendering body parts where they are expected) and visual-motor (mirroring the user's movement in the real world to the virtual world) synchronies for non-HMD users via off-the-shelf commodity full-body tracking devices (e.g. Microsoft Kinect v2 and a laptop's internal webcam).
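At its core, the visual-motor synchrony described above amounts to retargeting tracked joint positions onto the avatar's skeleton every frame. The following is a minimal sketch in Python/NumPy of one such retargeting step: it computes the rotation that carries a rest-pose bone direction onto the tracked bone direction. The joint coordinates and helper names are illustrative assumptions for exposition only, not part of our Unity implementation.

```python
import numpy as np

def bone_direction(parent: np.ndarray, child: np.ndarray) -> np.ndarray:
    """Unit vector pointing from a parent joint to its child joint."""
    v = child - parent
    return v / np.linalg.norm(v)

def rotation_between(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Rotation matrix mapping unit vector a onto unit vector b
    (Rodrigues' formula); used to retarget a rest-pose bone onto the
    tracked pose, mirroring the user's movement onto the avatar."""
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    if np.isclose(c, -1.0):  # 180-degree flip: rotate around any orthogonal axis
        axis = np.eye(3)[np.argmin(np.abs(a))]
        v = np.cross(a, axis)
        v /= np.linalg.norm(v)
        return 2.0 * np.outer(v, v) - np.eye(3)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

# Example: retarget a left forearm from a T-pose onto a tracked pose
# where the arm now hangs downwards (coordinates are made up).
rest_dir = bone_direction(np.array([0.0, 1.4, 0.0]),     # elbow (rest)
                          np.array([0.3, 1.4, 0.0]))     # wrist (rest)
tracked_dir = bone_direction(np.array([0.0, 1.4, 0.0]),  # elbow (tracked)
                             np.array([0.0, 1.1, 0.0]))  # wrist (tracked)
R = rotation_between(rest_dir, tracked_dir)
assert np.allclose(R @ rest_dir, tracked_dir)
```

In the actual game, this per-bone mapping is handled by the tracking packages named in the implementation section; the sketch only shows the geometric idea behind mirroring real-world movement into the VE.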

DC2: Immerse the non-HMD user in the activity

The non-HMD user has access to both the real and virtual worlds. Since the user's presence is split between reality and VR [33], we implement the virtual world in a way that constantly prompts the user to allocate their attention to the VE. Therefore, the user's presence and attention are mostly given to the virtual world, as their actions are centered on the VE.

DC3: Establish co-presence for non-HMD user in remote setups

Since co-presence is the intersection between the user's presence and social presence, it is achievable by making users feel that they are present in a shared environment with another user. Therefore, supporting verbal communication and networking the non-HMD user's gestures to the VR user is a must to provide access, a shared environment, and proximity, which together achieve Co-Presence, or Others' Presence.
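Networking the non-HMD user's gestures comes down to streaming the tracked joint data to the remote peer each frame. The loopback sketch below illustrates the idea in Python; our implementation uses Photon inside Unity, so the socket, port, and joint names here are purely illustrative assumptions.

```python
import json
import socket

# One tracked frame: joint name -> (x, y, z) position in metres.
# Joint names and coordinates are illustrative.
frame = {"head": (0.0, 1.7, 0.0), "hand_r": (0.4, 1.2, 0.1)}

ADDR = ("127.0.0.1", 9999)

# Receiver stands in for the HMD user's client, which would apply the
# incoming pose to the non-HMD user's avatar each frame.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(ADDR)

# Sender stands in for the non-HMD user's client publishing its pose.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(json.dumps(frame).encode("utf-8"), ADDR)

data, _ = receiver.recvfrom(4096)
received = json.loads(data)
print(received["hand_r"])  # JSON turns tuples into lists: [0.4, 1.2, 0.1]
```

Pairing such a pose stream with a voice channel (Photon Voice in our case) is what provides the access, shared environment, and proximity named above.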

3 Game Design and Implementation

To address the research gap, we followed the presented design considerations to develop QuarantivityVR. Our prototype provides a full sense of self-embodiment to the non-HMD user in a remote asymmetric setup using affordable commodity hardware. The non-HMD user is represented in the virtual environment via a full-body-tracked realistic avatar, and their movement can be tracked using either Microsoft's Kinect sensor or a laptop's webcam [DC1].

3.1 QuarantivityVR: Game Concept

QuarantivityVR is based on an Egyptian board game, Quarantivity, that emerged during the latest COVID-19 lockdown. The main aim of the game is to engage the players in a variety of activities, such as impersonation and pictionary. QuarantivityVR adopts the same rationale. To this end, we offer the users three activities: (1) movie-guessing via gestures only, (2) Spelling-Bee, and (3) mathematical questions. The users must perform each activity within 60 seconds [DC2]. During a single game round, a user performs the three activities in succession. We restricted the movie-guessing activity to movies with one-word titles, due to the lack of finger tracking in our prototype. The words to be spelled and the mathematical questions are of intermediate difficulty, adopted from the original board game. In the game, there are two user roles:

Acting Player

is the player that performs the three activities in succession.

Judging Player

is the player that determines whether the acting player performed each of the activities correctly by pressing a green button shown in the environment or a red one denoting that the activity was not correctly performed.

3.2 QuarantivityVR: Game Design and Flow

Figure 2

Figure A shows the movie-guessing scene with the movie name displayed to the judging player. Figure B shows the to-be-spelled word displayed to the judging player; the word is pronounced to the acting player via an agent. Figure C shows the mathematical riddle that appears to the acting player.

The game starts with the user choosing to join or create a new room. Afterwards, they are redirected to a screen to choose their avatar. We provide two full-body realistic avatars (male, female). Once the user chooses an avatar, non-HMD users are prompted to specify whether they would like their movement to be tracked via the Kinect or the laptop's webcam. Once the game starts, the judging player presses the red button to start the timer for the activities:

Movie-Guessing

Once the timer starts, the judging player sees a movie name in the game scene. The judging player should convey the movie name via gestures so that the acting player can guess it. Once the acting player makes the correct guess, the judging player presses the green button and one point is granted to the acting player. If the correct guess is not made or the time (60 s) elapses, the judging player presses the red button.

Spelling-Bee

During this activity, an agent pronounces the to-be-spelled word to the acting player, and the word itself appears in the judging player's game scene. Once the acting player correctly spells the word out loud, the judging player presses the green button to increase the acting player's score; otherwise, the red button is pressed.

Mathematical Questions

During this activity, the question is visible to the acting player and the answer is visible to the judging one. If the acting player answers the question correctly, the judging player presses the green button to grant the point for this activity. Otherwise, the red button is pressed.
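The round logic above can be condensed into a small scoring routine. The Python sketch below is a simplified, hypothetical restatement of that flow (the actual game is implemented in Unity): the verdict and timing mappings stand in for live button presses and the in-game timer.

```python
ACTIVITIES = ("movie-guessing", "spelling-bee", "math-questions")
TIME_LIMIT = 60  # seconds per activity, as in QuarantivityVR

def play_round(judge_verdicts, time_taken):
    """Score one round for the acting player.

    judge_verdicts: activity -> True if the judging player pressed the
    green button (correct), False for the red button.
    time_taken: activity -> seconds the acting player needed.
    Both mappings are illustrative stand-ins for live game input.
    """
    score = 0
    for activity in ACTIVITIES:      # the three activities in succession
        correct = judge_verdicts[activity]
        in_time = time_taken[activity] <= TIME_LIMIT
        if correct and in_time:      # green button pressed within 60 s
            score += 1               # one point per completed activity
    return score

# Example: correct on two activities, but the spelling ran over time.
verdicts = {"movie-guessing": True, "spelling-bee": True, "math-questions": False}
times = {"movie-guessing": 42, "spelling-bee": 63, "math-questions": 20}
print(play_round(verdicts, times))  # → 1
```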

3.3 Implementation

The game is implemented using Unity 3D. The implementation comprised two main parts: the VR aspect and the non-VR aspect. To support HMD-based interactions, HTC Vive headsets, controllers, and trackers were used along with RootMotion's Final IK VRIK solver to support 5-point (head, 2 hands, 2 legs) full-body tracking. As for the non-VR part, we relied on RF Solutions' Kinect with MS-SDK package to support full-body tracking via the Kinect for our non-HMD users [DC1]. To support full-body tracking via webcam, Digital Standard's ThreeD Pose Unity Barracuda was used to estimate the user's pose and movement. The game scenes were built with Just Two's Garden Decorations scene and 3D Everything's Company Office package. The green (win) and red (lose) buttons were implemented via Meanwhile on the Moon's Levers buttons and Switches package in Unity. Finally, the networking between the users (HMD, non-HMD) is handled using Exit Games' Photon2 and Photon Voice2 packages. Our setup comprised (1) one VR-ready laptop (MSI, Windows 10) [DC3], (2) an HTC Vive headset and controllers, (3) one laptop (MSI, Windows 10, VR ready) with an internal webcam, and (4) Microsoft's Kinect v2. The VR player and non-VR player were placed in two separate rooms with a LAN connection between their laptops.

3.4 Pilot Test

In order to evaluate the effectiveness of our game, we pilot tested (N = 22, 11 pairs) the game in all the Role × Device (Kinect, Webcam) configurations in a within-subject setting. During the pilot test, we measured the users' overall sense of presence and social presence, along with the preferred device for the non-VR user, after the users finished playing the four configurations. The preference results were analyzed using the Chi-Square test: users (both VR and non-VR) significantly preferred the Kinect over the webcam while playing the game (χ²(1, N = 22) = 7.54, p < 0.01). Presence and Social Presence were measured using the iGroup Presence Questionnaire (IPQ) and the Competitive Social Presence in Team-Based Digital Games questionnaire, respectively. The mean values of the IPQ and Competitive Social Presence scales are shown in Figure 3, where users show high ratings of Presence, Spatial Presence, and Awareness along with adequate Engagement ratings.
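The preference analysis is a one-way Chi-Square goodness-of-fit test against an even split. A sketch with SciPy, using hypothetical counts — the raw preference counts are not reported here, so the 18/4 split below is illustrative only and is not intended to reproduce the reported statistic:

```python
from scipy.stats import chisquare

# Hypothetical preference counts for illustration only: say 18 of 22
# users preferred the Kinect and 4 preferred the webcam.
observed = [18, 4]                 # [Kinect, Webcam]

# Null hypothesis: no preference, i.e. an even 11/11 split.
# chisquare defaults to uniform expected frequencies.
result = chisquare(observed)
print(f"chi2({len(observed) - 1}, N={sum(observed)}) = "
      f"{result.statistic:.2f}, p = {result.pvalue:.4f}")
```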

Figure 3

The mean values of the pilot test results of the IPQ Questionnaire (Spatial Presence, Realism, Involvement, and Presence) and Competitive Social Presence (Awareness and Engagement).

Figure 4

Figures A, B, and C show the VR user, the non-VR user using the Kinect, and the non-VR user using the webcam, respectively, while playing the game during the conducted user study.

4 Study Design

Since our initial results revealed promising engagement and presence ratings, we conducted a wider-scale within-subject user study (N = 24, 12 pairs) to test the effect of non-HMD user agency on the quality of both users' gaming experience in terms of (1) presence and social presence, (2) user experience, (3) perceived workload, and (4) score. Our study had two independent variables:

Role

is the user's role within the game. It has two levels: Acting and Judging. In the Acting level, the user performs the three activities in succession; in the Judging level, the user determines whether the Acting player performed each activity correctly.

Device

is the non-VR user's full-body tracking mechanism. It has two levels: Kinect, where the non-VR user's movement is tracked via the Kinect sensor, and Webcam, where the non-VR user's movement is tracked via the laptop's webcam.

4.1 Metrics

In our study, we relied on the following questionnaires along with the game score to measure our dependent variables:

iGroup Presence Questionnaire

is used to measure the users' sense of Presence, Realism, Involvement, and Spatial Presence [28].

Competitive Social Presence Questionnaire

is commonly used in team-based digital games to measure the users' sense of social presence in terms of their Awareness and Engagement ratings [25].

User Experience Questionnaire

is used to measure the users’ experience in the environment in terms of the environment’s Attractiveness, Perspicuity, Dependability, Stimulation, Novelty, and Efficiency [15].

Nasa Task Workload Index

is used to measure the users' perceived workload while playing the game, in terms of Mental, Temporal, and Physical Demand as well as perceived Performance, Effort, and Frustration [9].

4.2 Procedure

Each participant was greeted individually in a separate room. Afterwards, they signed a consent form that covered (1) the nature of the study, (2) the type of data that would be collected, and (3) the acquisition of video and audio footage of the whole gameplay. Thereafter, they were familiarized with the setup. For example, VR users were shown how to use the headset with the controllers, and non-VR users were told where to stand and how to move in order to be tracked by either the Kinect or the laptop's webcam. Once familiarized, they started to play the game. Each pair played four configurations, one for each Device (Webcam/Kinect) × Role combination. After the pair finished playing a single configuration, they filled in the IPQ, Competitive Social Presence, UEQ, and NASA-TLX questionnaires. The scores of both the VR and non-VR player were recorded in every configuration.

4.3 Participants

24 participants (14 male, 10 female), all students at the German University in Cairo, were recruited for the experiment via posts on social media groups and word of mouth. Their ages ranged from 21 to 23 (M=22.16, SD=0.5). There were 4 male-female pairs, 3 female-female pairs, and 5 male-male pairs.

Figure 5

The interaction plots of the users’ Presence and Involvement rates in the VR/Kinect and VR/Webcam configurations. VR-Acting users experienced higher presence rates than both (Kinect, Webcam)-Acting ones. However, VR-Judging and Kinect-Judging users experienced similar presence rates, while VR-Judging users experienced higher presence than Webcam-Judging ones. Generally, VR users reported higher Involvement rates than non-VR (Kinect, Webcam) users did.

5 Analysis and Results

We analyzed the measurements of the users’ sense of (1) presence, (2) social presence, (3) user experience (UX), and (4) perceived workload, as well as (5) the score, in all the Role (Acting, Judging) × Tracking Mechanism (Webcam, Kinect) configurations. We considered only the design parameters related to the non-HMD user (role and tracking device), as the VR user and the non-HMD user did not exchange places; the comparisons involving the VR modality are therefore between-subject tests. Our analysis thus comprised three comparisons. First, we compared the measurements of the non-VR users while using the Kinect to those while using the Webcam (within-subject). Then, we compared the measurements of the VR user to those of the non-VR user in both the Kinect and the Webcam configurations (between-subject). For each of the three comparisons (Kinect-Webcam, VR-Kinect, VR-Webcam), we analyzed the (1) presence, (2) social presence, (3) user experience, and (4) perceived workload results using non-parametric analysis of variance (ANOVA) on aligned rank transformed (ART) data, while the score results were analyzed using the Wilcoxon signed-rank test, as the score is only calculated while performing the activities in the Acting role.
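The aligned-rank-transform step can be sketched as follows. This is an illustrative Python sketch, not the authors’ actual pipeline (ART analyses are commonly run with the ARTool package); the data, column layout, and helper function are hypothetical. For one main effect, ART strips the estimates of all other effects from the response, adds back the effect of interest, ranks the aligned values, and then runs a standard ANOVA on the ranks:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical long-format data for a 2x2 (Device x Role) design.
device = np.repeat(["Kinect", "Webcam"], 24)
role = np.tile(np.repeat(["Acting", "Judging"], 12), 2)
y = rng.normal(4, 1, 48)  # e.g., a questionnaire subscale score

def aligned_ranks_for(factor, other):
    """Align y for the main effect of `factor`, then rank (ART procedure)."""
    grand = y.mean()
    aligned = np.empty_like(y)
    for d in np.unique(factor):
        for r in np.unique(other):
            cell = (factor == d) & (other == r)
            # strip every effect (cell mean), add back the effect of interest
            aligned[cell] = (y[cell] - y[cell].mean()
                             + y[factor == d].mean() - grand)
    return stats.rankdata(aligned)

ranks = aligned_ranks_for(device, role)
# F test for the Device main effect on the aligned ranks
f, p = stats.f_oneway(ranks[device == "Kinect"], ranks[device == "Webcam"])
```

In a full ART analysis, each main effect and the interaction receives its own alignment and ranking, and the ranks are submitted to the complete factorial (here, mixed-design) ANOVA model rather than a one-way F test as in this simplified sketch.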

5.1 Kinect/Webcam Results

Figure 6

The box plots showing the Kinect-Webcam rates of Dependability, Attractiveness, Stimulation, and Effort, respectively. Judging users reported higher dependability rates than acting ones, while Kinect users reported higher attractiveness, stimulation, and effort rates than Webcam users.

5.1.1 Presence and Social Presence

The results of the non-parametric ANOVA based on ART showed no significant effect of Device (F(1,33)=1.95, p=0.172), Role (F(1,33)=2.04, p=0.163), or Device × Role interaction (F(1,33)=0.004, p=0.95) on Presence. Moreover, there was no significant effect of Device (F(1,33)=0.92, p=0.34), Role (F(1,33)=3.62, p=0.06), or Device × Role interaction (F(1,33)=0.08, p=0.77) on Spatial Presence. Similarly, no significant effect of Device (F(1,33)=3.25, p=0.08), Role (F(1,33)=0.76, p=0.39), or Device × Role interaction (F(1,33)=0.70, p=0.41) was observed on Awareness. Engagement likewise showed no significant effect of Device (F(1,33)=0.001, p=0.97), Role (F(1,33)=1.03, p=0.32), or Device × Role interaction (F(1,33)=2.69, p=0.11). Additionally, no significant effect of Device (F(1,33)=0.088, p=0.77), Role (F(1,33)=2.43, p=0.13), or Device × Role interaction (F(1,33)=0.56, p=0.46) on Realism was observed. Following the same pattern, no significant effect of Device (F(1,33)=0.06, p=0.8), Role (F(1,33)=0.01, p=0.91), or Device × Role interaction (F(1,33)=0.15, p=0.69) on Involvement was observed.

5.1.2 User Experience

The results of the non-parametric ANOVA based on ART showed a significant effect of Role (F(1,33)=7.04, p<0.05) on Dependability, where the judging users (M=1.94, SD=0.93) showed higher dependability rates than acting ones (M=1.57, SD=0.74). However, no significant effect of Device (F(1,33)=0.035, p=0.85) nor Device × Role interaction (F(1,33)=0.34, p=0.56) was shown. On the other hand, a significant effect of Device (F(1,33)=6.7, p<0.05) on Stimulation was observed, where the Kinect users (M=2.69, SD=0.63) showed higher stimulation rates than Webcam ones (M=2.2, SD=0.93). However, no significant effect of Role (F(1,33)=4.03, p=0.053) nor Device × Role interaction (F(1,33)=1.01, p=0.32) was observed. Along the same line, a significant effect of Device (F(1,33)=8.32, p<0.05) on Attractiveness was shown, where the Kinect users (M=2.82, SD=0.34) showed higher attractiveness rates than Webcam ones (M=2.34, SD=0.71). However, no significant effect of Role (F(1,33)=0.04, p=0.84) nor Device × Role interaction (F(1,33)=0.27, p=0.61) was detected. On a side note, no significance was observed for:

  1. Perspicuity: Device (F(1,33)=3.53, p=0.07), Role (F(1,33)=0.82, p=0.37), and Device × Role interaction (F(1,33)=0.17, p=0.68).

  2. Efficiency: Device (F(1,33)=0.21, p=0.65), Role (F(1,33)=0.28, p=0.60), and Device × Role interaction (F(1,33)=0.025, p=0.88).

  3. Novelty: Device (F(1,33)=2.4, p=0.13), Role (F(1,33)=1.37, p=0.25), and Device × Role interaction (F(1,33)=1.35, p=0.25).

5.1.3 Perceived Workload

Figure 7

The interaction plots showing the rates of Realism, Engagement, Temporal Demand, Effort, and Performance obtained in the VR-Kinect configurations, respectively. The results of Performance are inversely coded; therefore, VR-Acting users perceived their performance to be better than Kinect-Acting users did. Moreover, Kinect-Judging users perceived the activity to be more temporally demanding than VR-Judging users did. VR users perceived the virtual environment to be more realistic than Kinect users did. However, Kinect-Acting users reported higher Engagement rates than VR-Acting users did. Generally, Acting users reported exerting more Effort while playing the game than Judging users did.

A significant effect of Role (F(1,33)=12.1, p<0.05) and Device (F(1,33)=6.27, p<0.05) on Effort was shown, where the acting users (M=2, SD=1.56) showed higher effort rates than judging ones (M=1.08, SD=1.53), and the Kinect users (M=1.83, SD=1.61) showed higher effort rates than Webcam ones (M=1.25, SD=1.57). However, no significant effect of Device × Role interaction (F(1,33)=3.19, p=0.08) was observed. On a side note, no significance was observed for the following:

  1. Mental Demand: Device (F(1,33)=0.64, p=0.43), Role (F(1,33)=0.61, p=0.44), and Device × Role interaction (F(1,33)=0.15, p=0.70).

  2. Physical Demand: Device (F(1,33)=0.39, p=0.54), Role (F(1,33)=1.49, p=0.23), and Device × Role interaction (F(1,33)=0.2, p=0.66).

  3. Temporal Demand: Device (F(1,33)=2.59, p=0.12), Role (F(1,33)=1.54, p=0.22), and Device × Role interaction (F(1,33)=0.34, p=0.56).

  4. Performance: Device (F(1,33)=0.52, p=0.48), Role (F(1,33)=3.39, p=0.07), and Device × Role interaction (F(1,33)=0.99, p=0.33).

  5. Frustration: Device (F(1,33)=1.5, p=0.23), Role (F(1,33)=0.06, p=0.81), Device × Role interaction (F(1,33)=0.09, p=0.76).

5.1.4 Scores

Wilcoxon signed-rank test results showed that there was no significant difference in scores between the Kinect and Webcam (Z=0.54, p=0.59) configurations.
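A paired score comparison of this kind can be run with SciPy’s Wilcoxon signed-rank test; the per-pair scores below are hypothetical, used only to illustrate the call:

```python
from scipy.stats import wilcoxon

# Hypothetical per-pair game scores in the two Device configurations
kinect_scores = [7, 5, 9, 6, 8, 7, 6, 9, 5, 8, 7, 6]
webcam_scores = [6, 6, 8, 7, 8, 6, 7, 8, 5, 7, 8, 6]

# Paired, non-parametric comparison; zero differences are dropped by default
stat, p = wilcoxon(kinect_scores, webcam_scores)
```

The test ranks the absolute paired differences and compares the sums of positive and negative ranks, so it makes no normality assumption about the score distribution.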

5.2 VR/Kinect Results

5.2.1 Presence and Social Presence

The results of the non-parametric ANOVA based on ART showed a significant effect of Device × Role interaction (F(1,22)=4.41, p<0.05) on Presence, where the VR-Acting users (M=5.58, SD=0.90) showed higher presence rates than Kinect-Acting ones (M=4.92, SD=1.31). However, no significant effect of Device (F(1,22)=1.01, p=0.33) nor Role (F(1,22)=1.27, p=0.27) was shown. On the other hand, a significant effect of Device × Role interaction (F(1,22)=4.86, p<0.05) on Engagement was observed, where the Kinect-Acting users (M=5.07, SD=0.93) showed higher engagement rates than VR-Acting ones (M=4.29, SD=1.03). However, no significant effect of Device (F(1,22)=1.13, p=0.299) nor Role (F(1,22)=0.04, p=0.85) was observed. Moreover, a significant effect of Device (F(1,22)=9.41, p<0.05) on Involvement was shown, where the VR users (M=4.36, SD=1.40) showed higher involvement rates than Kinect ones (M=2.57, SD=1.38). However, no significant effect of Role (F(1,22)=0.003, p=0.96) nor Device × Role interaction (F(1,22)=0.51, p=0.48) was detected. Following the same line, a significant effect of Device (F(1,22)=5.53, p<0.05) on Realism was detected, where the VR users (M=4.875, SD=0.92) showed higher realism rates than Kinect ones (M=4.16, SD=0.79). However, no significant effect of Role (F(1,22)=3.09, p=0.09) nor Device × Role interaction (F(1,22)=2.78, p=0.11) was observed. On a side note, no significance was observed for:

  1. Spatial Presence: Device (F(1,22)=3.68, p=0.07), Role (F(1,22)=4.07, p=0.06), and Device × Role interaction (F(1,22)=2.89, p=0.10).

  2. Awareness: Device (F(1,22)=0.0003, p=0.99), Role (F(1,22)=0.001, p=0.97), and Device × Role interaction (F(1,22)=1.87, p=0.19).

5.2.2 User Experience

The results of the non-parametric ANOVA based on ART showed insignificance for:

  1. Attractiveness: Device (F(1,22)=0.71, p=0.41), Role (F(1,22)=1.11, p=0.30), and Device × Role interaction (F(1,22)=0.01, p=0.89).

  2. Perspicuity: Device (F(1,22)=0.55, p=0.47), Role (F(1,22)=0.39, p=0.54), and Device × Role interaction (F(1,22)=1.19, p=0.29).

  3. Efficiency: Device (F(1,22)=0.002, p=0.96), Role (F(1,22)=0.05, p=0.83), and Device × Role interaction (F(1,22)=0.68, p=0.42).

  4. Dependability: Device (F(1,22)=0.02, p=0.90), Role (F(1,22)=1.19, p=0.29), and Device × Role interaction (F(1,22)=2.51, p=0.13).

  5. Stimulation: Device (F(1,22)=0.09, p=0.77), and Role (F(1,22)=1.21, p=0.28), and Device × Role interaction (F(1,22)=0.02, p=0.90).

  6. Novelty: Device (F(1,22)=0.12, p=0.73), and Role (F(1,22)=0.003, p=0.96), and Device × Role interaction (F(1,22)=0.0008, p=0.98).

5.2.3 Perceived Workload

The results of the non-parametric ANOVA based on ART showed a significant effect of Device × Role interaction (F(1,22)=5.98, p<0.05) on Temporal Demand, where the Kinect-Judging users (M=1.5, SD=1.31) showed higher temporal demand rates than VR-Judging ones (M=0.33, SD=0.89). Moreover, the results showed a significant effect of Device (F(1,22)=4.79, p<0.05), where the Kinect users (M=1.29, SD=1.19) showed higher temporal demand rates than VR ones (M=0.5, SD=1.06). However, there was no significant effect of Role (F(1,22)=0.57, p=0.46). Similarly, a significant effect of Device × Role interaction (F(1,22)=5.19, p<0.05) on Performance was observed, where the Kinect-Acting users (M=1.5, SD=1.68) showed worse performance rates than VR-Acting ones (M=0.33, SD=0.65); the performance rates are inversely coded in the original questionnaire. Moreover, the results showed a significant effect of Role (F(1,22)=6.9, p<0.05), where the acting users (M=0.9, SD=1.38) showed higher (i.e., worse, given the inverse coding) performance ratings than judging ones (M=0.45, SD=0.83). However, there was no significant effect of Device (F(1,22)=4.11, p=0.055). As expected, a significant effect of Role (F(1,22)=11.5, p<0.05) on Effort was observed, where the acting users (M=2.04, SD=1.81) showed higher effort rates than judging ones (M=1.1, SD=1.41). However, no significant effect of Device (F(1,22)=1.85, p=0.19) nor Device × Role interaction (F(1,22)=3.75, p=0.07) was shown. On a side note, no significance was observed for:

  1. Mental Demand: Device (F(1,22)=0.31, p=0.58), Role (F(1,22)=2.90, p=0.10), and Device × Role interaction (F(1,22)=0.19, p=0.66).

  2. Physical Demand: Device (F(1,22)=2.27, p=0.15), Role (F(1,22)=2.16, p=0.16), and Device × Role interaction (F(1,22)=0.13, p=0.72).

  3. Frustration: Device (F(1,22)=0.04, p=0.85), Role (F(1,22)=0.13, p=0.72), and Device × Role interaction (F(1,22)=1.19, p=0.29).

5.2.4 Scores

Wilcoxon signed-rank test results showed that there was no significant difference in scores between the VR and Kinect users (Z=0.32, p=0.75).

5.3 VR/Webcam Results

Figure 8

The box plots of Attractiveness, Stimulation, and Mental Demand obtained in the VR-Webcam configuration. VR users perceived their experience to be more attractive and stimulating than Webcam users did. Moreover, acting users reported higher mental demand than judging users did.

5.3.1 Presence and Social Presence

The results of the non-parametric ANOVA based on ART showed a significant effect of Device × Role interaction (F(1,22)=9.29, p<0.05) on Presence, where the VR-Acting users (M=5.58, SD=0.90) showed higher presence rates than Webcam-Acting ones (M=4.33, SD=1.92). However, no significant effect of Device (F(1,22)=2.72, p=0.11) nor Role (F(1,22)=1.53, p=0.23) was observed. Similarly, a significant effect of Device (F(1,22)=12.32, p<0.05) on Involvement was observed, where the VR users (M=4.36, SD=1.40) showed higher involvement rates than Webcam ones (M=2.48, SD=1.41). However, no significant effect of Role (F(1,22)=0.11, p=0.75) nor Device × Role interaction (F(1,22)=0.25, p=0.62) was shown. On a side note, no significance was shown for:

  1. Spatial Presence: Device (F(1,22)=3.52, p=0.074), Role (F(1,22)=0.57, p=0.46), and Device × Role interaction (F(1,22)=0.16, p=0.69).

  2. Awareness: Device (F(1,22)=1.11, p=0.30), Role (F(1,22)=1.64, p=0.21), and Device × Role interaction (F(1,22)=3.83, p=0.063).

  3. Engagement: Device (F(1,22)=1.16, p=0.29), Role (F(1,22)=3.3, p=0.083), and Device × Role interaction (F(1,22)=1.92, p=0.18).

  4. Realism: Device (F(1,22)=3.66, p=0.069), Role (F(1,22)=1.17, p=0.29), and Device × Role interaction (F(1,22)=0.41, p=0.53).

5.3.2 User Experience

The results of the non-parametric ANOVA based on ART showed a significant effect of Device (F(1,22)=5.54, p<0.05) and Role (F(1,22)=8.55, p<0.05) on Stimulation, where the VR users (M=2.72, SD=0.50) showed higher stimulation rates than Webcam ones (M=2.18, SD=0.93), and the acting users (M=2.59, SD=0.73) showed higher stimulation rates than judging ones (M=2.3, SD=0.83). However, there was no significant Device × Role interaction (F(1,22)=2.42, p=0.13). Similarly, a significant effect of Device (F(1,22)=6.33, p<0.05) on Attractiveness was observed, where the VR users (M=2.84, SD=0.36) showed higher attractiveness rates than Webcam ones (M=2.34, SD=0.71). However, no significant effect of Role (F(1,22)=0.05, p=0.82) nor Device × Role interaction (F(1,22)=0.22, p=0.64) was shown. On a side note, no significance was reported for:

  1. Perspicuity: Device (F(1,22)=0.13, p=0.72), Role (F(1,22)=0.41, p=0.53), and Device × Role interaction (F(1,22)=0.86, p=0.36).

  2. Efficiency: Device (F(1,22)=0.11, p=0.74), Role (F(1,22)=0.86, p=0.36), and Device × Role interaction (F(1,22)=0.60, p=0.45).

  3. Dependability: Device (F(1,22)=0.013, p=0.91), Role (F(1,22)=1.14, p=0.3), and Device × Role interaction (F(1,22)=3.15, p=0.09).

  4. Novelty: Device (F(1,22)=2.19, p=0.15), Role (F(1,22)=3.22, p=0.087), and Device × Role interaction (F(1,22)=2.99, p=0.098).

5.3.3 Perceived Workload

The results of the non-parametric ANOVA based on ART showed a significant effect of Role (F(1,22)=6.4, p<0.05) on Mental Demand, where the acting users (M=1.58, SD=1.72) showed higher mental demand rates than judging ones (M=1.00, SD=1.35). However, no significant effect of Device (F(1,22)=0.07, p=0.79) nor Device × Role interaction (F(1,22)=2.74, p=0.11) was observed. Moreover, no significance was observed for:

  1. Physical Demand: Device (F(1,22)=0.77, p=0.39), Role (F(1,22)=1.93, p=0.18), and Device × Role interaction (F(1,22)=0.00072, p=0.98).

  2. Temporal Demand: Device (F(1,22)=0.61, p=0.44), Role (F(1,22)=0.75, p=0.40), and Device × Role interaction (F(1,22)=0.92, p=0.35).

  3. Performance: Device (F(1,22)=3.82, p=0.063), Role (F(1,22)=1.85, p=0.19), and Device × Role interaction (F(1,22)=1.37, p=0.25).

  4. Effort: Device (F(1,22)=0.004, p=0.95), Role (F(1,22)=3.5, p=0.075), and Device × Role interaction (F(1,22)=0, p=1).

  5. Frustration: Device (F(1,22)=0.42, p=0.52), Role (F(1,22)=0.38, p=0.55), and Device × Role interaction (F(1,22)=0.87, p=0.36).

5.3.4 Scores

Wilcoxon signed-rank test results showed that there was no significant difference in scores between the VR and Webcam users (Z=0.14, p=0.89).

6 Discussion

6.1 User’s Presence Is Dependent on the User Role and Immersion Rates

The user’s sense of presence and social presence is influenced by, among other things, synchrony [34]. In accord with Jeong et al. [11] and unlike Zhang et al.’s findings [36], our results show that users experienced more presence when using a more immersive medium or while conducting an engaging activity. For instance, VR users experienced higher presence than Webcam users did, and VR-Acting users experienced higher presence than Kinect-Acting users. However, VR-Judging users reported presence rates close to those of Kinect-Judging users. We attribute the difference between our findings and those of Zhang et al. to the disparity in the granularity of the tracking mechanisms adopted for VR and non-VR users: we used 5-point tracking for VR users while relying on a depth camera or a webcam for non-VR users, whereas Zhang et al. [36] used their walking-simulator prototype (accurate feet tracking) for both the VR and non-VR users. Additionally, VR users viewed the environment as more realistic than Kinect users did. Moreover, VR users were more involved with the environment than non-VR users were, whether the latter used the Kinect or the Webcam. Since non-HMD users perceive both worlds, events in the real world might distract them or detract from their feeling of presence in the VE, as Waterworth and Hoshi [33] state. Therefore, presence for the non-HMD user depends not only on immersion, but also on the number of interactions that need to be performed in the real world vs. the virtual one [16].

6.2 User’s Social Presence Is Higher in Kinect-Acting than VR-Acting

Unlike Jeong et al.’s findings [11], Kinect-Acting users experienced higher social presence (engagement) rates than VR-Acting users. The difference can be attributed to providing a full sense of embodiment in our setup, as we used full-body avatars that supported visual-motor synchrony, whereas Jeong et al. did not offer an embodied avatar in their setup. This finding can also be attributed to the users’ role, as users are most engaged when performing the activity. Moreover, the activity is very time demanding, as users need to perform it within only 1 minute. Additionally, in the judging roles the engagement rates were very similar for both VR and Kinect users. The reason this phenomenon did not occur in the VR vs. Webcam results can be attributed to the results presented in Section 6.3. This finding implies that high levels of presence and immersion are not necessary to establish high social presence rates; the user only needs to feel present in the VE to perceive the existence of the partner, i.e., to experience social presence. It acts as a step forward in supporting self-embodiment for non-HMD users, enabling them to interact and engage with the virtual environment as VR users can.

6.3 Kinect Offers More Stimulating and Attractive Experience than Webcam

Users playing the game with the Kinect perceived their experience to be more attractive and stimulating than those who played using the Webcam. However, Kinect users exerted more effort while playing the game than Webcam users did. These results are in accord with the user-preference results obtained from our pilot test, where users preferred playing with the Kinect, or preferred that their partner play with the Kinect, over the Webcam. Overall, no significant difference in presence, social presence, or performance (perceived performance and score) was observed between the Kinect and the Webcam. While the Kinect is preferred and offers a better user experience, users can still perform the different activities using the Webcam, which is more affordable and accessible to all target user groups. The user preference for the Kinect v2 was expected, as its tracking quality is more accurate than that of the webcam.

6.4 Acting Role Induced Higher Perceived Workload Rates

During the Acting role, users performed three activities in succession, namely (1) movie-guessing using gestures, (2) spelling bee, and (3) answering mathematical questions. Although the acting role yielded higher presence, engagement, and dependability rates, acting users reported higher mental demand and exerted more effort while playing the game. On the other hand, Kinect-Judging users reported higher temporal demand than VR users did. However, acting users had better perceived-performance rates when playing the game in VR. We attribute this finding to the variety of activities. For example, users had to rely rapidly on memory in order to spell words correctly within the 60 seconds allocated to the spelling-bee activity. They also performed a time-demanding cognitive activity while answering the mathematical questions. Similarly, the movie-guessing activity required users to think fast about which gestures to make so that the judging user could guess the movie within one minute. Therefore, providing users with equal roles, where they can interact and engage properly with the environment and other players, is a key factor in enhancing users’ presence, social presence, and perceived performance, creating an enjoyable, realistic social virtual experience. However, it is advisable to make the activities less time-demanding to avoid increasing users’ overall perceived workload.

7 Conclusion

The recent advances in the development of immersive devices created a sense of exclusion among user groups that cannot afford full immersion for health or economic reasons. Consequently, researchers have designed various asymmetric interfaces in an effort to include non-VR (non-HMD) users. To support self-embodiment in asymmetric VR, we designed an asymmetric social VR game, QuarantivityVR, a VR adaptation of the Egyptian board game Quarantivity. During the game, users take one of two roles, acting and judging. Acting users performed three main activities (movie-guessing, spelling bee, and solving mathematical questions). The non-HMD users are represented via full-body avatars, and their movements were mirrored into the virtual world via a Kinect sensor or a laptop’s webcam. We measured the users’ overall sense of (1) presence, (2) social presence, (3) user experience (UX), and (4) perceived workload, as well as the game score, in all the Role (Acting, Judging) × Tracking Mechanism (Webcam, Kinect) configurations, via a 2 × 2 within-subject study (N = 24, 12 pairs). Afterwards, we compared our measurements while the non-VR users played using the Kinect to those while they played using the laptop’s webcam. Moreover, we compared the measurements of the VR user to those of the non-VR user in both non-VR configurations (Kinect, Webcam). Our results show that users feel higher presence and involvement in the VR configuration. However, non-VR users using the Kinect reported higher social presence rates, in terms of engagement, than VR users. Moreover, non-VR users using the Kinect perceived their experience to be more stimulating and attractive than those using the Webcam. Although no significant difference in user performance was observed between the Webcam and the Kinect for non-VR users, in terms of both perceived performance and score, VR users perceived their performance to be better than non-VR users using the Kinect.
Therefore, we plan to explore the effect of supporting self-embodiment for non-VR users in asymmetric social VR activities that focus more on interacting with the environment than interacting with other players. Finally, we envision that our work will aid researchers in creating enjoyable asymmetric social experiences and act as a step towards facilitating social interactions in physical and virtual spaces.

About the authors

Amal Yassien

Amal Yassien is a PhD student and an assistant lecturer at the German International University in Cairo (GIU). She obtained her M.Sc. (September 2021) and B.Sc. (July 2017) degrees in Computer Science and Engineering from the German University in Cairo (GUC). Her master’s thesis was about enhancing social VR experiences via augmenting nonverbal cues. Her current research interest lies in exploring the future of social augmented reality platforms and enhancing users’ social experience within these platforms.

Mohamed Ahmed Soliman

Mohamed Ahmed Soliman is a B.Sc. holder from the Faculty of Media Engineering and Technology at the German University in Cairo (GUC). Mohamed is currently doing his pre-masters in Computer Science and Engineering at the GUC Berlin Campus. The work presented in this manuscript is his bachelor thesis project.

Slim Abdennadher

Slim Abdennadher is the President of the German International University (GIU) in the New Administrative Capital, Egypt, and a Full Professor in Computer Science at its Faculty of Informatics and Computer Science. Prof. Dr. Abdennadher earned his doctoral degree in 1998 and his habilitation in 2001 from the Ludwig-Maximilians-University Munich, Germany. He completed his Bachelor and Master degrees at the University of Kaiserslautern, Germany. Between 2011 and 2021, he held the position of Vice President for Academic and Research Affairs at the German University in Cairo, in addition to serving as the Acting Dean of the Faculty of Media Engineering and Technology and, from 2004, as the study dean for all engineering faculties. During his appointment, he was also responsible for duties such as heading the examination committee, international accreditation, and the research committee. His research focuses on constraint programming, human-computer interaction, serious games, and affective computing, areas in which he has authored three books and more than 120 peer-reviewed publications. He has participated in numerous prestigious international conferences and academic workshops as general chair, member of the organizing committee, or member of the academic program committee, and has brought his research groups many research funds through projects that tackle different areas of research.

References

[1] S AlAwadhi, N AlHabib, D Murad, F AlDeei, M AlHouti, T Beyrouthy, and S Al-Kork. 2017. Virtual reality application for interactive and informative learning. In 2017 2nd International Conference on Bio-engineering for Smart Technologies (BioSMART). IEEE, 1–4.10.1109/BIOSMART.2017.8095336Search in Google Scholar

[2] Steven R Aragon. 2003. Creating social presence in online environments. New directions for adult and continuing education 2003, 100 (2003), 57–68.10.1002/ace.119Search in Google Scholar

[3] Raffaello Brondi, Leila Alem, Giovanni Avveduto, Claudia Faita, Marcello Carrozzino, Franco Tecchia, and Massimo Bergamasco. 2015. Evaluating the impact of highly immersive technologies and natural interaction on player engagement and flow experience in games. In International Conference on Entertainment Computing. Springer, 169–181.10.1007/978-3-319-24589-8_13Search in Google Scholar

[4] Raffaello Brondi, Giovanni Avveduto, Leila Alem, Claudia Faita, Marcello Carrozzino, Franco Tecchia, Y Pisan, and Massimo Bergamasco. 2015. Evaluating the effects of competition vs collaboration on user engagement in an immersive game using natural interaction. In Proceedings of the 21st ACM Symposium on Virtual Reality Software and Technology. ACM, 191.10.1145/2821592.2821643Search in Google Scholar

[5] James J Gibson. 1977. The theory of affordances. Hilldale, USA 1, 2 (1977), 67–82.Search in Google Scholar

[6] Jerônimo Gustavo Grandi, Henrique Galvan Debarba, and Anderson Maciel. 2019. Characterizing asymmetric collaborative interactions in virtual and augmented realities. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, 127–135.Search in Google Scholar

[7] Jan Gugenheimer, Evgeny Stemasov, Julian Frommel, and Enrico Rukzio. 2017. Sharevr: Enabling co-located experiences for virtual reality between hmd and non-hmd users. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, 4021–4033.10.1145/3025453.3025683Search in Google Scholar

[8] Jan Gugenheimer, Evgeny Stemasov, Harpreet Sareen, and Enrico Rukzio. 2018. FaceDisplay: Towards Asymmetric Multi-User Interaction for Nomadic Virtual Reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 54. DOI: 10.1145/3173574.3173628

[9] Sandra G Hart. 2006. NASA-Task Load Index (NASA-TLX); 20 years later. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 50. Sage Publications, Los Angeles, CA, 904–908. DOI: 10.1177/154193120605000909

[10] Pascal Jansen, Fabian Fischbach, Jan Gugenheimer, Evgeny Stemasov, Julian Frommel, and Enrico Rukzio. 2020. ShARe: Enabling Co-Located Asymmetric Multi-User Interaction for Augmented Reality Head-Mounted Displays. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology. 459–471. DOI: 10.1145/3379337.3415843

[11] Kisung Jeong, Jinmo Kim, Mingyu Kim, Jiwon Lee, and Chanhun Kim. 2020. Asymmetric interface: user interface of asymmetric virtual reality for new presence and experience. Symmetry 12, 1 (2020), 53. DOI: 10.3390/sym12010053

[12] Erika Katherine Johnson and Seoyeon Celine Hong. 2020. Instagramming Social Presence: A Test of Social Presence Theory and Heuristic Cues on Instagram Sponsored Posts. International Journal of Business Communication (2020). DOI: 10.1177/2329488420944462

[13] Sukran Karaosmanoglu, Katja Rogers, Dennis Wolf, Enrico Rukzio, Frank Steinicke, and Lennart E Nacke. 2021. Feels like team spirit: Biometric and strategic interdependence in asymmetric multiplayer VR games. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–15. DOI: 10.1145/3411764.3445492

[14] Marc Erich Latoschik, Daniel Roth, Dominik Gall, Jascha Achenbach, Thomas Waltemate, and Mario Botsch. 2017. The effect of avatar realism in immersive social virtual realities. In Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology. ACM, 39. DOI: 10.1145/3139131.3139156

[15] Bettina Laugwitz, Theo Held, and Martin Schrepp. 2008. Construction and evaluation of a user experience questionnaire. In Symposium of the Austrian HCI and Usability Engineering Group. Springer, 63–76. DOI: 10.1007/978-3-540-89350-9_6

[16] Stefan Liszio and Maic Masuch. 2016. Designing shared virtual reality gaming experiences in local multi-platform games. In International Conference on Entertainment Computing. Springer, 235–240. DOI: 10.1007/978-3-319-46100-7_23

[17] Giuseppe Mantovani and Giuseppe Riva. 1999. “Real” presence: how different ontologies generate different criteria for presence, telepresence, and virtual presence. Presence 8, 5 (1999), 540–550. DOI: 10.1162/105474699566459

[18] Brian E Mennecke, Janea L Triplett, Lesya M Hassall, and Zayira Jordan Conde. 2010. Embodied social presence theory. In 2010 43rd Hawaii International Conference on System Sciences. IEEE, 1–10. DOI: 10.1109/HICSS.2010.179

[19] Catherine S Oh, Jeremy N Bailenson, and Gregory F Welch. 2018. A Systematic Review of Social Presence: Definition, Antecedents, and Implications. Frontiers in Robotics and AI 5 (2018), 114. DOI: 10.3389/frobt.2018.00114

[20] Niklas Osmers, Michael Prilla, Oliver Blunk, Gordon George Brown, Marc Janßen, and Nicolas Kahrl. 2021. The Role of Social Presence for Cooperation in Augmented Reality on Head Mounted Devices: A Literature Review. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–17. DOI: 10.1145/3411764.3445633

[21] Kaitlyn M Ouverson and Stephen B Gilbert. 2021. A Composite Framework of Co-located Asymmetric Virtual Reality. Proceedings of the ACM on Human-Computer Interaction 5, CSCW1 (2021), 1–20. DOI: 10.1145/3449079

[22] Xueni Pan and Antonia F de C Hamilton. 2018. Why and how to use virtual reality to study human social interaction: The challenges of exploring a new research landscape. British Journal of Psychology (2018). DOI: 10.1111/bjop.12290

[23] Thomas D Parsons, Andrea Gaggioli, and Giuseppe Riva. 2017. Virtual reality for research in social neuroscience. Brain Sciences 7, 4 (2017), 42. DOI: 10.3390/brainsci7040042

[24] Giuseppe Riva, Fabrizio Davide, and Wijnand A IJsselsteijn. 2003. Being there: The experience of presence in mediated environments. In Being There: Concepts, Effects and Measurement of User Presence in Synthetic Environments 5 (2003).

[25] Fiona M Rivera, Fons Kuijk, and Ebroul Izquierdo. 2015. Navigation in REVERIE’s virtual environments. In 2015 IEEE Virtual Reality (VR). IEEE, 273–274. DOI: 10.1109/VR.2015.7223401

[26] Martijn J Schuemie, Peter Van Der Straaten, Merel Krijn, and Charles APG Van Der Mast. 2001. Research on presence in virtual reality: A survey. CyberPsychology & Behavior 4, 2 (2001), 183–201. DOI: 10.1089/109493101300117884

[27] Valentin Schwind. 2018. Implications of the uncanny valley of avatars and virtual characters for human-computer interaction. (2018).

[28] Valentin Schwind, Pascal Knierim, Nico Haas, and Niels Henze. 2019. Using presence questionnaires in virtual reality. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–12. DOI: 10.1145/3290605.3300590

[29] Mel Slater. 2003. A note on presence terminology. Presence Connect 3, 3 (2003), 1–5.

[30] Harrison Jesse Smith and Michael Neff. 2018. Communication Behavior in Embodied Virtual Reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 289.

[31] Phil Turner. 2007. The intentional basis of presence. In Proceedings of the 10th International Workshop on Presence. 127–134.

[32] Ina Wagner, Wolfgang Broll, Giulio Jacucci, Kari Kuutti, Rod McCall, Ann Morrison, Dieter Schmalstieg, and Jean-Jacques Terrin. 2009. On the role of presence in mixed reality. Presence 18, 4 (2009), 249–276. DOI: 10.1162/pres.18.4.249

[33] John Alexander Waterworth and Kei Hoshi. 2016. Human-Experiential Design of Presence in Everyday Blended Reality. Springer. DOI: 10.1007/978-3-319-30334-5

[34] Amal Yassien, Passant ElAgroudy, Elhassan Makled, and Slim Abdennadher. 2020. A Design Space for Social Presence in VR. In Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society. 1–12. DOI: 10.1145/3419249.3420112

[35] Amal Yassien, ElHassan B Makled, Passant Elagroudy, Nouran Sadek, and Slim Abdennadher. 2021. Give-Me-A-Hand: The Effect of Partner’s Gender on Collaboration Quality in Virtual Reality. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. 1–6. DOI: 10.1145/3411763.3451601

[36] Qimeng Zhang, Ji-Su Ban, Mingyu Kim, Hae Won Byun, and Chang-Hun Kim. 2021. Low-Asymmetry Interface for Multiuser VR Experiences with Both HMD and Non-HMD Users. Sensors 21, 2 (2021), 397. DOI: 10.3390/s21020397

Published Online: 2022-04-01
Published in Print: 2022-04-26

© 2022 Walter de Gruyter GmbH, Berlin/Boston
