Publicly available. Published by Oldenbourg Wissenschaftsverlag, August 6, 2020

The Shared View Paradigm in Asymmetric Virtual Reality Setups

  • Robin Horst
  • Fabio Klonowski
  • Linda Rau
  • Ralf Dörner

From the journal i-com

Abstract

Asymmetric Virtual Reality (VR) applications are a substantial subclass of multi-user VR that does not offer all participants the same interaction possibilities with the virtual scene. While one user might be immersed using a VR head-mounted display (HMD), another user might experience the VR through a common desktop PC. In an educational scenario, for example, learners can use immersive VR technology to inform themselves at different exhibits within a virtual scene. Educators can use a desktop PC setup to follow and guide learners through virtual exhibits while still being able to pay attention to safety aspects in the real world (e. g., preventing learners from bumping into a wall). In such scenarios, educators must ensure that learners have explored the entire scene and have been informed about all virtual exhibits in it. Suitable visualization techniques can support educators and facilitate conducting such VR-enhanced lessons. One common technique is to render the view of the learners on the 2D screen available to the educators. We refer to this solution as the shared view paradigm. However, this straightforward visualization involves challenges. For example, educators have no control over the scene, and the collaboration in the learning scenario can become tedious. In this paper, we differentiate between two classes of visualizations that can help educators in asymmetric VR setups. First, we investigate five techniques that visualize the view direction or field of view of users (view visualizations) within virtual environments. Second, we propose three techniques that can support educators in understanding what parts of the scene learners have already explored (exploration visualizations). In a user study, we show that our participants preferred a volume-based rendering and a view-in-view overlay solution for view visualizations. Furthermore, we show that our participants tended to use combinations of different view visualizations.

1 Introduction

In the age of new digital realities, educational tasks can be supported by different types of technologies, such as Virtual Reality (VR), Augmented Reality (AR), or common desktop systems. For certain learning scenarios, it can be beneficial to involve several of these technologies simultaneously. For example, blended learning systems [22] can draw from both e-learning methodologies and the physical presence of educators and learners. In a setup that utilizes VR, learners can use immersive VR technology to explore a virtual scene, whereas educators may use a desktop PC to comply with necessary safety aspects during a lesson (e. g., preventing learners that wear head-mounted displays (HMDs) from tripping over cables). Through the desktop integration, educators can still follow and guide the learners through the virtual world (e. g., [41]). Thus, multi-user digital reality systems can blend different degrees of reality [38] in such asymmetric VR setups for distributed learning applications.

In these collaborative scenarios, it is essential that educators can understand at runtime what learners are seeing, as this improves collaborative task performance [5]. Furthermore, it might be necessary for educators to ensure that learners have explored the entire scene and have been informed about all virtual exhibits in it. Suitable visualization techniques can support educators and facilitate conducting such VR-enhanced lessons. For example, while learners explore an educational VR scene, supporting visualizations can allow educators who use a common desktop PC to monitor the learners' VR experience and provide fitting tutoring instructions. As the educators are not immersed in VR themselves, they are also able to observe the learners in the real environment.

One common solution to this challenge is to render the same view that the learners get inside the HMD on a 2D screen and thereby give the educators access to the virtual scene [18], [30], [32], [20], [52], [19]. Both users share one view. We refer to this as the shared view paradigm. It is a straightforward approach with current VR hardware and the accompanying software development kits (SDKs), such as the HTC Vive [9] or the Oculus Rift [12] with Steam VR [10] and the Oculus SDK [13], respectively.

However, the shared view paradigm involves challenges. The asymmetric experience of the virtual world is one-sided. The learner has full control over the view, whereas the educators can only perceive it passively. Their view position and direction fully depend on the learner's movement. Crucial tasks such as looking around in the scene or retracing what parts of it the learner has already explored are not achievable for educators.

Giving educators an autonomous, decoupled interface to the VR scene using established technology such as keyboard and mouse could be helpful [52]. Such interfaces are also common for navigating in a first-person perspective within 3D computer games [21]. Using a single window, the educators can navigate through the virtual world independently of the learners. However, educators then no longer perceive the gaze of the learner. To meet the challenges of the shared view paradigm, educators must be provided with a decoupled interface that still provides them with information about the gaze of learners, for example, a visualization of the learner's view. There are indeed some common solutions to this challenge that are used in related work, such as a view frustum visualization [52], [42], [43] or a view in view technique [33]. However, these techniques are only side aspects of the original work, so their suitability for addressing the shared view paradigm remains largely unexplored. In this paper, we make the following contributions:

  1. We explore challenges and alternative solutions to the shared view paradigm in an asymmetric Virtual Reality setup. More common visualization approaches are considered as well, to investigate their potential and suitability within the educational use-case. We consider aspects that enable educators at runtime to understand what the learners currently see and what they already have explored during the session.

  2. We investigate five techniques that can be used as alternatives to the shared view paradigm. These techniques support educators in understanding the gaze of the learners (view visualizations) at runtime. All considered techniques still allow educators to orient themselves and move in the virtual world decoupled from the learners’ view. Furthermore, we introduce three techniques that help educators to comprehend what parts of the 3D scenery the learners have already explored and which parts they have not seen yet (exploration visualizations).

  3. Based on a prototype implementation of the techniques, we state lessons learned and give advice on how to design asymmetric virtual environments. We give practical insights into how modern VR authoring tools, such as game engines, can be used to implement our visualizations. We show how we extended an existing VR project.

  4. We evaluate our view visualization techniques within a user study to show which techniques were most suitable for an application in asymmetric VR setups. We draw conclusions about the potential of different possibilities to address the shared view paradigm and make an informed decision about which techniques should be used.

This paper is organized as follows: The next section discusses related work. In Section 3, we explore techniques that support educators and address the shared view paradigm. In the fourth section, we describe their implementation and show their feasibility. Thereafter, we state the evaluation of the techniques within our user study. Section 6 provides a conclusion and points out directions for future work.

2 Related Work

In this section, we discuss related work. We focus on work about asymmetric VR setups and the inclusion of desktop PC interfaces, as well as gaze visualizations in VR.

2.1 Asymmetric Virtual Reality Setups

The importance of awareness of other users within virtual environments is well established [3]. Benford and Fahlén [2] describe this importance using the example of a virtual environment where all users use the same hardware to access the environment. They propose a spatial model of awareness in which the view of users, the focus, is regarded as highly relevant for the interaction between them. The model has been implemented in several systems, such as MASSIVE [23], where the authors show how different users interact depending on the avatar representation and their degree of immersion. Furthermore, work by Billinghurst and Kato [4] shows the relevance of the focus for AR. They argue that a rising degree of immersion diminishes the focus of users and restricts the cooperation within such symmetric immersive virtual environments (see [50] for further reference to asymmetry in the field of VR). Further work by Billinghurst et al. [5] describes the shared space concept, which is the application of AR in the area of computer-supported cooperative work. They point out that visual cues of users can increase the communication bandwidth and improve the performance in completing collaborative tasks. These insights suggest that knowledge about what other users can see is of high relevance for asymmetric VR setups as well, and that users may benefit from this knowledge. This motivates the research of visualizations of the learners’ view in an asymmetric setup during runtime.

Early work in the area of multi-user environments by Broll [6] proposes a trainer-trainee system. Here, both the trainer and the trainee are part of the immersive environment. They experience it by using one VR HMD each. In contrast, Roo and Hachet [47] investigate multi-user environments that combine different mixed reality (MR) [38] modalities. In their work, users can choose between six different hardware setups, ranging from physical objects and projector-based AR to fully immersive VR HMDs. In all setups, users interact with the virtual world through spatial actions. Consequently, the authors describe necessary awareness considerations in such a setup to support the users. They propose to render what the HMD shows onto an interactive picture so that users with a less immersive experience can perceive what the fully immersed users see. None of their user roles utilizes a common desktop PC to get insights into the scene.

Peter et al. [41] propose a dedicated tool for a specific user role in asymmetric VR setups that they call the VR-guide. As the name implies, VR-guides try to guide immersed users through the experience of a virtual environment. The authors illustrate VR-guides in a scenario like the one we described in Section 1. VR-guides use desktop PCs to get insights into the VR scene of the immersed VR users. They distinguish five areas in which their proposed tool supports VR-guides. Two of them relate to the actual view of learners. First, they provide the optional feature to adopt the learners’ view by rendering what the users see on the desktop PC. Second, they mention features for monitoring learners. The authors state that additional visualizations, such as the exploration state of the scene, would also be helpful. However, the authors focus on implementing and evaluating interactive highlighting techniques in their paper, so that alternative solutions for the shared view paradigm remain unmentioned.

Recent work by Horst et al. [28], [29] also explores multi-user environments that incorporate both users that are fully immersed and users that utilize desktop PCs, like the VR-guide. They propose a texture-based solution to visualize avatars of co-located users within their virtual surroundings and render them on a 2D screen at runtime. A Kinect [37] camera and its associated real-time segmentation algorithm [49] are utilized to capture the learners’ appearance from a third-person point of view (POV). They track the camera using an external device to merge the physical space of the camera with the virtual space of a virtual camera and composite the textures using depth compositing. They provide educators with the freedom to relocate the physical camera and get insights into both the virtual scene and the learner in it. As the authors focus on realistic avatar representations, abstract view or gaze visualizations are not considered in this work.

2.2 Gaze Visualization and Virtual Reality

There also exists work about gaze visualization relating to VR technology. Stellmach et al. [51] transfer established gaze visualization techniques into three-dimensional virtual environments. These are advanced 3D scan paths and 3D attentional maps (heat maps), which they visualize based on pre-recorded eye-tracking data. Löwe et al. [35] propose specialized visualizations and a corresponding analytics framework to evaluate head movement and gaze data for immersive video technology. They include a view similarity visualization that highlights viewing areas. Clay et al. [7] propose measurement and visualization techniques for exploration data and behavior data of learners. They record basic variables at a fixed frame rate and save them for later evaluation. Within their visualizations, they include hit points in 3D space at which users have looked for a certain amount of time. They color-code these points based on the distances from which they have been looked at. These works provide insights into the virtual content and the view of learners on a 2D screen. They enable experts to analyze the data a posteriori, after the actual usage of the VR technology. The perception of gaze data during an ongoing session is not mentioned by the authors.

Figure 1: A 3D scene (a), the original POV of the learner and the educator (b and c), and five illustrations (d–h) that show the POV of the educator during the visualization of the learner’s view.

We have already mentioned the existence of work that utilizes common view visualizations in collaborative VR setups [52], [42], [43], [44], [33]. Lee et al. [33] propose a view in view technique that enables users with AR glasses or VR HMDs to perceive each other’s view. They render a view plane within the 3D environment so that both immersed users get a certain amount of what Lee et al. call ‘view awareness’ of each other. Tait and Billinghurst [52] use a view frustum visualization within the virtual environment to model the view and the head pose of remote users. Piumsomboon et al. [42], [43] make use of similar view frustum visualizations to give their participants awareness cues of what their collaborating co-users are seeing. Furthermore, Piumsomboon et al. [44] combine this visualization with avatar representations of the co-users. However, these works either focus on visualizations for fully immersed users or utilize the techniques as a side aspect and a means to an end within their work. The visualization techniques seen in these works therefore remain largely unexplored. Both established and novel techniques must be evaluated to assess their suitability within an asymmetric VR setup.

3 Alternative Solutions to the Shared View Paradigm

This section investigates alternative view visualizations to the shared view paradigm. Furthermore, we propose a set of exploration visualizations that educators can use to see which parts of a virtual scene have already been explored by their learners.

3.1 View Visualizations

View visualizations are visual cues that support educators in our asymmetric VR learning scenario in understanding what the learners are currently seeing. We consider five visualizations, illustrated in Figure 1d–h. For all of them, we allow the educator to navigate through the virtual scene freely. All view visualizations are only visible to educators – learners experience the VR without any additional visualizations.

The first technique we investigate is the volume technique (Figure 1d). Volume makes use of the camera intrinsics that are configured for the virtual camera that renders the learner’s view for the HMD. Based on these parameters and the camera position, a 3D view frustum is rendered in the view of the educator to illustrate which parts of the scene the learner can see at this moment. The length of the volume should be adjusted to a semantically reasonable length considering the content of the scene. For example, an indoor scene might only need a shorter volume than an exterior scene: objects of interest could be farther away but still be visible to the learners outdoors, whereas an overlong frustum might cut through walls that they cannot see through. As a help to find a reasonable length, level of detail (LOD) information could be used, as it also encodes semantic information from the scene author and is already used to describe a variety of 3D scene properties [53], [14], [48].

View in view is a straightforward technique that shows the field of view (FOV) of learners in one corner of the educator’s FOV (Figure 1e). The FOV in a VR is the area of the virtual world that is seen by a user at a given time. The aspect ratio of the image overlay must be adjusted compared to the original HMD texture. Otherwise, the image texture could either be too small to perceive details or so big that it covers a major portion of the educator’s FOV. These adjustments depend on the ratio and resolution of the desktop screen and the HMD screen. By providing users with a resizable window, they can adjust it during use.

Figure 2: Three illustrations that depict our proposed techniques for visualizing the exploration state of a scene on a desktop screen.

The monochrome technique transfers the information about what learners see from their point of view (POV) into the FOV of the educator by coloring it. Pixels that show areas that the learner can see are displayed in their original color, whereas all areas that are solely visible to the educator are depicted in a monochrome color palette. Therefore, this technique visualizes the intersection of the two views, as illustrated in Figure 1f.

Minimap (Figure 1g) displays a top-down map in one corner of the desktop screen. The learner’s gaze is represented by a triangular shape on this map, in the style of a 2D view frustum. In this technique, we also explicitly visualize the view of the educator in the same way as the learner’s view. It can help educators to orient themselves in the virtual world with respect to the learner. This technique is also able to show intersecting areas.

Figure 1h shows blueprint – a technique that uses a top-down map similar to minimap. Again, both views are visualized. Instead of showing textured scene objects, blueprint presents outlined shapes in uniform colors for selected objects of the scene. This map provides a certain level of abstraction so that users can concentrate on important objects rather than seeing all elements of a scene. Therefore, the blueprint approach requires a selection of important aspects of the scene before usage. These aspects can be landmarks of the scene that help educators to orient themselves within the scene, but also important exhibits that should be explored by the learner. Scene authors must choose which of the objects should be visualized in consultation with the educators. Based on this consultation, the scene author creates the blueprint map.

All the above-mentioned techniques are non-exclusive and can be combined. For window-based visualizations, such as view in view, minimap, and blueprint, multiple windows can be arranged on the desktop screen. The scene-based visualizations (monochrome and volume) can be rendered simultaneously in the same view. Window-based and scene-based visualizations can also be combined with each other by simply applying both.

3.2 Exploration Visualizations

Educators can be supported by providing them with insights into the exploration state of the scene while collaborating within the asymmetric VR setup. We explore three different exploration visualizations that are suited for usage at runtime on a desktop screen.

The minimap fog technique draws on a common methodology from the area of third-person/top-down strategy games, called fog of war. Here, a map is placed on top of the educators’ FOV (Figure 2a). It is similar to the minimap or blueprint visualizations, which can both be used as a basis for this technique. The map is initially covered with gray ‘fog’. Only regions that have already been discovered by the learner are shown in their original coloring. The fog is removed little by little as the learner explores the 3D scene. Therefore, educators can see which parts of the scene have not been explored yet and can guide the learners to these areas to ensure a full exploration.

Fog of war is usually used on a 2D map, as stated above. We also transferred the methodology to three-dimensional space for visualizing the exploration state of the scene. We call this technique 3D fog. It is illustrated in Figure 2b. Instead of projecting the map metaphor onto a 2D plane, we fill the 3D space with fog and remove it in the areas that have been seen by the learner. Like the volume technique (Figure 1d), a view volume has to be defined, which makes the fog disappear where it intersects the volume boundaries. As for the volume technique, the size of the underlying view frustum has to be defined with respect to key aspects of the virtual scene. If the frustum is too big, areas that have not been actively explored might be revealed too soon. If it is too small, areas that have already been observed might not be marked as explored. For this technique, it is also important that the fog is not too dense even at long distances. Otherwise, three-dimensional fog may leave educators seeing only a black screen when they look into directions that have not been seen by the learner.

Figure 3: Four screenshots of our prototype that illustrate the investigated view visualizations. View in view and monochrome are shown as a combined version in (a).

The last exploration technique is halo (Figure 2c). Here, halo effects are placed within the 3D scene. A halo signals that the area where it is placed has not yet been explored by the learner. When explored, the corresponding halo vanishes or is tinted. Again, a certain view volume is needed to specify when a section of the 3D scene counts as explored. Compared to its predecessors, this technique does not visualize the entire area that has been explored but focuses on distinct exhibits or points of interest, like the blueprint abstraction mentioned before. Similar to blueprint, these distinct points have to be set before usage.

4 Prototype

We implemented all view and exploration visualization techniques to show their feasibility. We investigated how the techniques can extend an existing VR scene. A scene from the educational domain was utilized: a large industrial showroom with two floors and indoor and outdoor facilities contained several exhibits that informed learners about the use, construction, and physical model of a fuel cell. In the original VR scene, educators were included in the virtual environment by observing the render texture from the HMD mirrored on a desktop screen according to the shared view paradigm.

The prototype was implemented using the Unity game engine [54]. We utilized the Virtual Reality Toolkit (VRTK) [36] for implementing the learner interface. We used a distributed system design and displayed the original VR scene on an Oculus Go standalone VR HMD [11]. The application for the observing educator ran on a Windows gaming laptop with an i7 processor, 8 GB RAM, and a dedicated GeForce GTX 1060m GPU. The applications were connected within a private network. The location and virtual camera parameters (e. g., intrinsics) of the VR application were synchronized using Unity’s integrated UNet library. The VR app acted as a server, whereas the educator’s application connected to it as a client. By adjusting the intrinsics at runtime, our prototype can also work with VR HMDs that have different specifications.
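
A minimal sketch of how such a synchronization could look with Unity’s (now deprecated) UNet high-level API is shown below. The component name LearnerCameraSync, the synced fields, and the notion of a proxy camera on the educator side are our own assumptions for illustration, not taken from the paper’s prototype.

```csharp
using UnityEngine;
using UnityEngine.Networking;

// Hypothetical sketch: synchronizes the learner's head pose and camera
// intrinsics from the VR server to the educator client via UNet SyncVars.
public class LearnerCameraSync : NetworkBehaviour
{
    [SyncVar] public Vector3 headPosition;
    [SyncVar] public Quaternion headRotation;
    [SyncVar] public float verticalFov;   // degrees
    [SyncVar] public float aspect;

    public Camera learnerCamera;          // assigned on the VR (server) side
    public Camera educatorProxyCamera;    // assigned on the desktop (client) side

    void Update()
    {
        if (isServer && learnerCamera != null)
        {
            // The HMD camera is authoritative on the server.
            headPosition = learnerCamera.transform.position;
            headRotation = learnerCamera.transform.rotation;
            verticalFov = learnerCamera.fieldOfView;
            aspect = learnerCamera.aspect;
        }
        else if (educatorProxyCamera != null)
        {
            // The client drives an invisible proxy camera that the view
            // visualizations (volume, view in view, ...) can read from.
            educatorProxyCamera.transform.SetPositionAndRotation(headPosition, headRotation);
            educatorProxyCamera.fieldOfView = verticalFov;
            educatorProxyCamera.aspect = aspect;
        }
    }
}
```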

4.1 View Implementation

View in view (Figure 3a) was implemented using Unity UI components to create a canvas in the upper left corner. In addition to the camera for the educator, we created a second camera based on the synchronized data from the learner’s scene. This camera renders to a Unity RenderTexture, which is then projected onto the UI canvas. For our prototype, we adjusted the canvas to a fixed size. We used the aspect ratio of the desktop screen as the determining factor and used 1/3 of both its width and height. The fixed size ensured that educators were able to see details within the canvas and still had enough space to navigate and perceive their own POV.
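
The following Unity C# sketch illustrates this setup under the assumptions stated above (a proxy camera driven by the synced pose and a RawImage on a screen-space canvas); it is not the paper’s original code.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical sketch: renders the learner's proxy camera into a
// RenderTexture shown on a UI RawImage in the upper left corner,
// sized to one third of the desktop screen.
public class ViewInViewOverlay : MonoBehaviour
{
    public Camera learnerProxyCamera;  // camera driven by the synced pose
    public RawImage overlayImage;      // placed on a screen-space canvas

    void Start()
    {
        int width = Screen.width / 3;
        int height = Screen.height / 3;

        var texture = new RenderTexture(width, height, 24);
        learnerProxyCamera.targetTexture = texture;
        overlayImage.texture = texture;

        // Anchor the overlay to the upper left corner with a fixed size.
        var rect = overlayImage.rectTransform;
        rect.anchorMin = rect.anchorMax = new Vector2(0f, 1f);
        rect.pivot = new Vector2(0f, 1f);
        rect.sizeDelta = new Vector2(width, height);
        rect.anchoredPosition = Vector2.zero;
    }
}
```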

Based on the synchronized camera intrinsics, we create a view frustum object for the volume visualization (Figure 3b). This volume is rendered with a semi-transparent material so that the visualization does not occlude objects that learners currently see. We used a distance of 2 m for the ‘far clipping plane’ of the frustum to portray the visual attention range of the learner. This value was an approximation appropriate for the combined indoor/outdoor scene that we used.
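
A possible way to build such a frustum object procedurally from the synchronized intrinsics is sketched below. The component name and default values are assumptions; only the 2 m far distance follows the prototype description, and a semi-transparent material is expected on the MeshRenderer.

```csharp
using UnityEngine;

// Hypothetical sketch: builds a frustum mesh from the synchronized camera
// intrinsics and is attached to the learner's head proxy object.
[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class ViewVolume : MonoBehaviour
{
    public float verticalFov = 60f;  // degrees, updated from the synced intrinsics
    public float aspect = 16f / 9f;
    public float nearDistance = 0.1f;
    public float farDistance = 2f;   // "visual attention range" used in the prototype

    void Start()
    {
        GetComponent<MeshFilter>().mesh = BuildFrustumMesh();
    }

    Mesh BuildFrustumMesh()
    {
        float nearH = 2f * nearDistance * Mathf.Tan(0.5f * verticalFov * Mathf.Deg2Rad);
        float farH = 2f * farDistance * Mathf.Tan(0.5f * verticalFov * Mathf.Deg2Rad);
        float nearW = nearH * aspect;
        float farW = farH * aspect;

        Vector3[] v =
        {
            new Vector3(-nearW / 2, -nearH / 2, nearDistance), // near plane corners
            new Vector3( nearW / 2, -nearH / 2, nearDistance),
            new Vector3( nearW / 2,  nearH / 2, nearDistance),
            new Vector3(-nearW / 2,  nearH / 2, nearDistance),
            new Vector3(-farW / 2, -farH / 2, farDistance),    // far plane corners
            new Vector3( farW / 2, -farH / 2, farDistance),
            new Vector3( farW / 2,  farH / 2, farDistance),
            new Vector3(-farW / 2,  farH / 2, farDistance),
        };

        // Four side faces and the far plane, each split into two triangles.
        int[] t =
        {
            0, 4, 5,  0, 5, 1,   // bottom
            3, 2, 6,  3, 6, 7,   // top
            0, 3, 7,  0, 7, 4,   // left
            1, 5, 6,  1, 6, 2,   // right
            4, 7, 6,  4, 6, 5,   // far
        };

        var mesh = new Mesh { vertices = v, triangles = t };
        mesh.RecalculateNormals();
        return mesh;
    }
}
```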

The monochrome technique was implemented with three render textures and two shaders. Two of the textures reflect the current view of the educator – one colored and one post-processed in monochrome. The third is a binary black-and-white mask, which is also rendered from the educator’s POV. The first two textures are composited with one of the shaders using the mask, which contains the intersection information. The intersection reflects the parts of the scene that are seen by both the educator and the learner. The second shader is used to create the mask texture. Here, we utilize a view frustum like the one in the volume implementation. This time it is not rendered, but used for calculating which parts of the scene are seen by the learner. These parts of the scene are rendered white in the mask texture, while the remaining areas are ignored and left black. When combined, the generated texture presents the intersection of both views in color, while the rest of the scene is displayed in monochrome (Figure 3a). This texture is used as the final texture that is rendered on the desktop screen using Unity’s Blit() method.
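
As a rough illustration of the compositing step only, the following sketch shows a post-processing pass on the educator camera; the shader name "Custom/MonochromeComposite", its property "_MaskTex", and the way the mask is produced are assumptions rather than the prototype’s actual assets.

```csharp
using UnityEngine;

// Hypothetical sketch: blends the colored and a grayscale version of the
// educator's frame, controlled by a mask texture that marks the pixels
// the learner can currently see.
[RequireComponent(typeof(Camera))]
public class MonochromeComposite : MonoBehaviour
{
    public Material compositeMaterial;    // assumed shader "Custom/MonochromeComposite"
    public RenderTexture learnerViewMask; // white where both views intersect

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // The shader samples the mask: masked pixels keep their original color,
        // everything else is converted to the monochrome palette.
        compositeMaterial.SetTexture("_MaskTex", learnerViewMask);
        Graphics.Blit(source, destination, compositeMaterial);
    }
}
```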

We implemented minimap (Figure 3d) by capturing the scene with a separate orthographic camera. It was placed vertically above the position of the desktop player. One challenge was related to the indoor area of our scene and the actual height of the camera position. Two solutions could be pursued to meet this challenge. First, the map camera could be placed at a height that covers an appropriate area of the scene with moderate orthographic frustum measures, which determine the viewing volume for Unity cameras. In our case, however, there were objects in the outdoor areas that were taller than the second floor. To capture these objects from above, the camera had to be at least as high as these objects were tall. Therefore, some objects like the roof, the gable, and parts of the second floor had to be put on a separate Unity layer. This layer is marked not to be rendered by the map camera; since the camera follows the educator’s position, objects on the lowest floor would otherwise be occluded by the roof of the second floor. The second solution would place the camera at a fixed, short distance above the educator’s head position and then increase the orthographic size of the camera to cover a larger part of the scene. This prevents the roof or the second floor from occluding underlying objects, but objects taller than the camera height would be cut off in the map texture. Therefore, we decided to implement the first solution and customized the scene by creating a separate Unity layer for elements that should be ignored by the map camera.
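
A minimal sketch of the first solution could look as follows; the layer name "MapHidden" and the height and size values are illustrative assumptions, not values from the prototype.

```csharp
using UnityEngine;

// Hypothetical sketch: an orthographic map camera that follows the educator
// from above and ignores a dedicated layer holding the roof, gable, and
// other objects that would occlude the lower floors.
[RequireComponent(typeof(Camera))]
public class MinimapCamera : MonoBehaviour
{
    public Transform educator;         // desktop player the map follows
    public float cameraHeight = 40f;   // above the tallest outdoor object (assumed)
    public float orthographicSize = 15f;

    void Start()
    {
        var cam = GetComponent<Camera>();
        cam.orthographic = true;
        cam.orthographicSize = orthographicSize;
        // Render everything except the assumed "MapHidden" layer.
        cam.cullingMask = ~(1 << LayerMask.NameToLayer("MapHidden"));
    }

    void LateUpdate()
    {
        // Keep the camera vertically above the educator, looking straight down.
        transform.position = new Vector3(educator.position.x, cameraHeight, educator.position.z);
        transform.rotation = Quaternion.Euler(90f, 0f, 0f);
    }
}
```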

A methodology similar to minimap was used for implementing the blueprint technique (Figure 3c). We placed Unity standard primitives with a blue material at the positions of important objects in the scene. These primitives were then scaled accordingly and put on a separate Unity layer, which was exclusively rendered by the map camera.

4.2 Exploration Implementation

For the halo exploration (Figure 4a), we utilized the Unity halo component on empty game objects. This component controls the size, color, and position. We positioned the halos manually at places of interest, analogous to the places where we positioned primitives for the blueprint visualization. For larger objects, we did not place halos at the volumetric centers of the objects but at suitable positions next to them, like in front of a chalkboard or on top of a table.
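
Because the built-in Halo component is only marginally scriptable, the following sketch simply toggles a child object that carries the halo once the learner’s view volume touches the marker. The tag "LearnerViewVolume", the trigger-based check, and the component name are assumptions for illustration.

```csharp
using UnityEngine;

// Hypothetical sketch: a point-of-interest marker whose halo is switched off
// once the learner's view volume (a trigger collider on a kinematic Rigidbody,
// so that trigger events fire while it moves) intersects the marker.
[RequireComponent(typeof(Collider))]
public class ExplorationHalo : MonoBehaviour
{
    public GameObject haloObject;   // child object carrying the Halo component
    bool explored;

    void OnTriggerEnter(Collider other)
    {
        if (explored || !other.CompareTag("LearnerViewVolume"))
            return;

        explored = true;
        // Hide the halo to signal that this exhibit has been explored.
        haloObject.SetActive(false);
    }
}
```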

The minimap fog implementation (Figure 4b) utilized the map from the blueprint technique. The minimap technique itself could have been used alternatively, but to have fewer scene objects and less visual clutter, we decided to use the abstract blueprint map. To visualize the fog, we spawned a set of coherent semi-transparent cubes that were only visible to the map camera. We controlled the resolution of the fog by the size of the cubes. The fog cubes were positioned in a plane perpendicular to the map camera’s viewing direction, between the camera and the object primitives. A view volume of a given size was positioned at the same height. The fog cubes were disabled once they came into contact with the volume.
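
A sketch of how such a fog layer could be spawned is given below. The prefab is expected to carry a trigger-based script like the halo sketch above that disables the cube on contact with the view volume; the map size, height, and layer name are assumptions.

```csharp
using UnityEngine;

// Hypothetical sketch: spawns a flat grid of semi-transparent "fog" cubes
// between the map camera and the scene, on a layer only the map camera renders.
public class MinimapFogGrid : MonoBehaviour
{
    public GameObject fogCubePrefab;   // semi-transparent cube with trigger collider
    public Vector2 mapSize = new Vector2(40f, 30f);  // covered scene extent in meters (assumed)
    public float cubeSize = 1f;        // fog resolution
    public float fogHeight = 35f;      // just below the map camera (assumed)

    void Start()
    {
        int countX = Mathf.CeilToInt(mapSize.x / cubeSize);
        int countZ = Mathf.CeilToInt(mapSize.y / cubeSize);
        Vector3 origin = transform.position - new Vector3(mapSize.x, 0f, mapSize.y) * 0.5f;

        for (int x = 0; x < countX; x++)
        {
            for (int z = 0; z < countZ; z++)
            {
                Vector3 pos = origin + new Vector3((x + 0.5f) * cubeSize, fogHeight, (z + 0.5f) * cubeSize);
                var cube = Instantiate(fogCubePrefab, pos, Quaternion.identity, transform);
                cube.transform.localScale = new Vector3(cubeSize, 0.1f, cubeSize);
                cube.layer = LayerMask.NameToLayer("MinimapOnly");  // assumed layer name
            }
        }
    }
}
```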

Figure 4: Screenshots of the three proposed exploration state visualizations. 3D fog is illustrated in two separate pictures. On the left, the educator stands within the explored area looking towards non-explored space, whereas on the right, the user stands in unexplored areas and looks towards already explored ones.

We used a similar methodology to visualize the 3D fog (Figure 4c/4d). Instead of creating one flat layer of cubes, we spawned cubes continuously attached to each other as a uniform honeycomb that filled the entire 3D scene space. We utilized a 3D volume as described above to disable the cubes that intersect with the view of the learner. Again, the resolution of the fog can be adjusted via the actual size of the cubes. In our prototype, we used cubes with an edge length of approximately 1 m, as we experienced several seconds of loading time for this visualization with a higher number of fog cubes. The actual FOV of an educator who stands in an already explored area and looks towards an unexplored one is shown in Figure 4c. The reverse case, where an educator stands in an unexplored area and looks towards explored areas, is shown in Figure 4d. These pictures illustrate the importance of using non-additive fog elements. Otherwise, looking through multiple additive textures would obfuscate farther objects and envelop the user’s view in black.
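
Extending the minimap fog sketch to three dimensions could look roughly like this; the scene bounds are an assumption, and the per-cube disable logic is again expected on the prefab.

```csharp
using UnityEngine;

// Hypothetical sketch: fills a given 3D region of the scene with fog cubes.
// Each cube prefab (non-additive, semi-transparent material) is expected to
// disable itself when the learner's view volume intersects it.
public class VolumetricFog : MonoBehaviour
{
    public GameObject fogCubePrefab;
    public Bounds sceneBounds = new Bounds(Vector3.zero, new Vector3(40f, 10f, 30f)); // assumed extent
    public float cubeSize = 1f;   // roughly the resolution used in the prototype

    void Start()
    {
        Vector3 min = sceneBounds.min;
        Vector3Int counts = new Vector3Int(
            Mathf.CeilToInt(sceneBounds.size.x / cubeSize),
            Mathf.CeilToInt(sceneBounds.size.y / cubeSize),
            Mathf.CeilToInt(sceneBounds.size.z / cubeSize));

        for (int x = 0; x < counts.x; x++)
            for (int y = 0; y < counts.y; y++)
                for (int z = 0; z < counts.z; z++)
                {
                    Vector3 pos = min + new Vector3(x + 0.5f, y + 0.5f, z + 0.5f) * cubeSize;
                    var cube = Instantiate(fogCubePrefab, pos, Quaternion.identity, transform);
                    cube.transform.localScale = Vector3.one * cubeSize;
                }
    }
}
```

Note that instantiating one object per cube is what produces the loading time of several seconds mentioned above; a coarser cube size keeps the object count manageable.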

5 Evaluation

By conducting a user study, we evaluated our view visualizations that address the shared view paradigm in the asymmetric VR setup. We examined five aspects during the study to gain more insights about our proposed techniques:

  A1. Line of sight

  A2. Field of view

  A3. Reliability

  A4. Disturbance

  A5. Product character [24]

The first aspect that we consider is the viewing direction of learners, which we refer to as the line of sight (LOS) [40]. The LOS of users is a significant characteristic in collaborative virtual environments. It can enhance the face direction information for users [31] and changes very frequently throughout the usage of a VR [34]. If users can understand the LOS of other users in a collaborative VR setup, it can help them to understand each other’s views. We evaluate how well our participants could perceive the LOS of learners with our view visualizations and use this as the first quality criterion for view visualizations in VR.

The second aspect we evaluate is the field of view of learners. While the LOS can help educators to grasp the general orientation of learners, an explicit visualization of the FOV can enable them to understand more accurately what the learners can see, instead of deducing it solely from the viewing direction [16], [15]. Therefore, we chose the awareness of the learners’ FOV as the second criterion for evaluating view visualizations within an asymmetric VR setup.

The reliability of our techniques serves as an indicator of whether a specific technique provides our participants with the ability to understand the gaze of learners continuously or only intermittently. For example, some techniques require educators to look towards the learner’s avatar to perceive the view (e. g., volume). Therefore, situations can occur in which the view visualization is not present (e. g., when the educator looks away from the learner’s avatar). With reliability, we assess whether such discontinuities affect the quality of view visualizations.

Generally, additional visualizations that are not part of the original content of VR scenes, such as our view visualizations, may impact the visual perception of educators. Sophisticated or prominent visualizations may cause a certain disturbance (e. g., visual clutter). Therefore, we capture the amount of disturbance for each technique.

Finally, the product character [24] can be used to draw conclusions about the pragmatic qualities (usability) and hedonic qualities [27], [26] of our techniques. Usability is an established and significant criterion in the evaluation of user interfaces. Hedonic qualities enable us to make assumptions about the educators’ desire for pleasure and avoidance of boredom during the usage of our techniques. The AttrakDiff questionnaire [25] is a common tool for measuring these qualities of user interfaces and can be used in addition to other statistical evaluations to characterize digital products such as our techniques.

We formulated the following null hypotheses H1₀–H4₀ and alternative hypotheses H1ₐ–H4ₐ based on the discussion of our individual aspects A1–A4. The standardized A5 is treated separately. With these hypotheses, we evaluate to which degree educators felt supported by the individual techniques.

  H1₀. The techniques allow the line of sight to be perceived equally well.

  H1ₐ. The techniques do not allow the line of sight to be perceived equally well.

  H2₀. The techniques allow the field of view to be perceived equally well.

  H2ₐ. The techniques do not allow the field of view to be perceived equally well.

  H3₀. The techniques are equally reliable.

  H3ₐ. The techniques are not equally reliable.

  H4₀. The techniques disturb equally.

  H4ₐ. The techniques do not disturb equally.

We derived the following questions Q1–Q4 from the hypotheses and use them as measures for A1–A4. The product character (A5) is measured by the established AttrakDiff questionnaire regarding pragmatic and hedonic qualities.
  Q1. How well was the viewing direction of the other person recognizable?

  Q2. How well was the other person’s field of view recognizable?

  Q3. Did you know at all times what the other person was looking at?

  Q4. How much did you feel disturbed by the gaze visualization?

5.1 User Study

The user study involved 21 unpaid, voluntary participants (aged between 21 and 58 years, mean 31.25, SD 12.95; 5 female). Their VR experience was captured on a 4-point scale (mean 1.15, SD 0.91), where 0 means they have never used VR technology and 3 means they use VR regularly. On this basis, we classify the participants of our study as non-specialists in the field of VR.

We used a 24-inch screen for our study. The prototype, as described in Section 4, ran on a gaming computer with an i7 processor, a GeForce GTX 1070 GPU, and 8 GB RAM. An Oculus Go was used as VR technology in this setup. The study was performed within a controlled laboratory environment. The real-world setup is illustrated in Figure 5.

Figure 5: A photo of the real-world setup of our study. One of the experimenters used the Oculus Go HMD while the participant used the view visualization prototype.

The procedure of the study was as follows. At first, participants were welcomed and then informed about the asymmetric VR learning scenario and the challenge for educators of knowing what learners currently observe. They were briefly introduced to the user interface of the prototype and the keyboard controls. Thereafter, an experimenter assigned the participants the task to retrace and understand the view of an experimenter who took on the learner role and would use the VR technology to explore the virtual scene. We used a randomized block design to counter the impact of a possible learning process on the scores throughout the study, so each participant experienced the visualization techniques in a different order. The participants used each of the five view visualization techniques for about 4 minutes and observed an experimenter who explored a VR scene with several points of interest, as described in the prototyping section. They were asked to fill out a questionnaire after each of the visualizations.

This questionnaire incorporated the questions Q1–Q4 and the abbreviated version of the AttrakDiff questionnaire [27], [25]. Our participants used a version of the questionnaire that was translated into their native language. We used a 7-point semantic differential scale for each of the items in the questionnaire to capture the scores. As all techniques that we explored have the potential to be used in combination, we asked the participants to enable and disable the view visualizations until they reached a configuration that they found most suitable, based on the experience they had gained.

At the end of the user study, the participants were requested to fill out a post-study questionnaire that asked them to rank the visualizations from best to worst. This brief post-study questionnaire was also used to capture demographic data. A single session of the study was completed within a time frame of one hour.

5.2 Analysis of the Results

We performed non-parametric, dependent Friedman tests [17] on the items Q1–Q4 to test the data for significant differences. With a threshold for statistical significance of 5 %, all of these tests showed significant differences with p < 0.00001. We conducted further post-hoc tests (after Conover [8]) to identify between which techniques the significant differences occurred. Table 1 shows the output of the Friedman and the post-hoc tests. It also shows the absolute mean values for each of the techniques regarding each aspect, rounded to four decimal places. The p-values from the post-hoc tests are shown in Table 2.
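
For reference (a standard formulation, not reproduced from the paper), the Friedman statistic underlying these tests is computed from the per-participant ranks of the k = 5 techniques over the n = 21 participants:

\[
\chi^2_F \;=\; \frac{12}{n\,k\,(k+1)} \sum_{j=1}^{k} R_j^2 \;-\; 3\,n\,(k+1),
\]

where R_j denotes the sum of ranks that technique j received across all participants. The statistic is compared against a χ² distribution with k − 1 = 4 degrees of freedom, and Conover's post-hoc procedure then compares pairs of rank sums to locate the significant differences.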

Table 1

Output of the Friedman and the post-hoc tests for Q1–Q4. Furthermore, mean scores are shown for each of the techniques regarding each aspect, rounded to four decimal places. 0 is the lowest and 6 is the highest possible mean score.

Question Technique Ø-score Differs significantly from technique number (with p < 0.05)
Q1 (1) View in view 5.4286 (3), (4)
(2) Volume 5.5238 (3), (4)
(3) Monochrome 2.7143 (1), (2), (4), (5)
(4) Blueprint 3.5238 (1), (2), (3), (5)
(5) Minimap 5.4286 (3), (4)
Q2 (1) View in view 5.4286 (3), (4), (5)
(2) Volume 5.5714 (3), (4), (5)
(3) Monochrome 1.9048 (1), (2), (5)
(4) Blueprint 2.2857 (1), (2), (5)
(5) Minimap 4.8095 (1), (2), (3), (4)
Q3 (1) View in view 5.3333 (2), (3), (4), (5)
(2) Volume 4.5238 (1), (3), (4)
(3) Monochrome 2.2857 (1), (2), (5)
(4) Blueprint 2.2857 (1), (2), (5)
(5) Minimap 4.5238 (1), (3), (4)
Q4 (1) View in view 4.0000 (2)
(2) Volume 5.9048 (1), (3), (4), (5)
(3) Monochrome 3.0000 (2)
(4) Blueprint 4.0952 (2)
(5) Minimap 3.6191 (2)

Table 2

Rounded p-values resulting from the post-hoc tests with a threshold for statistical significance of 5 %.

Q1 View in view Volume Monochrome Blueprint
Volume 0.398
Monochrome ≤ 0.001 ≤ 0.001
Blueprint ≤ 0.001 ≤ 0.001 ≤ 0.001
Minimap 0.888 0.325 ≤ 0.001 ≤ 0.001
Q2 View in view Volume Monochrome Blueprint
Volume 0.32
Monochrome ≤ 0.001 ≤ 0.001
Blueprint ≤ 0.001 ≤ 0.001 0.32
Minimap ≤ 0.001 ≤ 0.001 ≤ 0.001 ≤ 0.001
Q3 View in view Volume Monochrome Blueprint
Volume ≤ 0.001
Monochrome ≤ 0.001 ≤ 0.001
Blueprint ≤ 0.001 ≤ 0.001 0.151
Minimap ≤ 0.001 0.665 ≤ 0.001 ≤ 0.001
Q4 View in view Volume Monochrome Blueprint
Volume ≤ 0.001
Monochrome 0.286 ≤ 0.001
Blueprint 0.541 ≤ 0.001 0.095
Minimap 0.476 ≤ 0.001 0.721 0.187

Regarding the perception of the line of sight (A1), Table 1 shows that view in view, volume, and minimap have similarly high values and all differ significantly from the lower-rated blueprint and monochrome techniques. The field of view aspect (A2) was rated above 5 points on average for view in view and volume, above 4 points for minimap, and below 2.5 points for blueprint and monochrome. View in view and volume each differ significantly from the other three techniques, minimap differs significantly from all other techniques, and blueprint and monochrome differ from the first three. So, view in view and volume were rated significantly higher than the other visualizations for A2. The third aspect for the view visualizations (reliability, A3) shows that view in view was given the highest mean score of 5.3333. View in view also differs significantly from the other four visualizations. Volume and minimap rank second, having higher values than blueprint and monochrome, and also differ significantly from the latter two. Regarding the disturbance of the view visualization (A4), volume received the highest score, which differs significantly from the remaining four techniques. The lowest value for this aspect was obtained by the monochrome technique with 3.0000; it only differs significantly from the volume technique.

Figure 6: The portfolio presentation compares the five view visualization techniques regarding hedonic and pragmatic aspects.

We also analyzed the outcomes of the AttrakDiff questionnaires and compared the view visualization techniques regarding A5. We used descriptive statistics as suggested by the authors of the questionnaire [27], [25]. The portfolio presentation of the view visualization scores (Figure 6) arranges the five techniques regarding their hedonic and pragmatic characteristics [27], [26]. It shows that the minimap technique is closest to the ‘desired’ region. Next to it are volume and view in view, which scored lower on hedonic but higher on pragmatic quality. Blueprint and monochrome are farther behind in both values and lie within and next to the ‘neutral’ field.

The word-pair visualization of the view visualization techniques (Figure 7) gives further insights into the single items of the AttrakDiff questionnaire. Besides a more detailed reflection of the portfolio presentation, it shows some special features and anomalies of the techniques. Monochrome has two larger amplitudes at the items ‘confusing – predictable’ and ‘unimaginative – creative’. Except for one, all items for monochrome were rated negatively compared to the neutral value of 3 points. However, our participants rated the creativity of this visualization as the highest of all five techniques. For view in view, we can observe some larger negative deviations at the items ‘ugly – attractive’, ‘tacky – stylish’, and ‘unimaginative – creative’. All view in view items lie above 3 points, whereas all blueprint items are located left of the 3-point line of the graphic. The curves of minimap and volume run visually similar. Six items of the volume technique were rated higher than for minimap, and four items of minimap were rated above the corresponding volume items. The largest gaps between these two techniques are located at two items concerning the pragmatic quality, ‘complicated – simple’ and ‘impractical – practical’, where the volume visualization obtained higher ratings, and at one item of the hedonic quality category, ‘cheap – premium’, where minimap was preferred by the participants.

Figure 7: The description of word-pairs for the view visualization techniques shows the mean ratings of the single items that the AttrakDiff consists of.

The combination and individual sorting tasks were analyzed as well. The corresponding data is presented in Table 3 and shows that the combination ‘view in view & volume’ was preferred by ten participants, which is nearly half of all participants. The second most preferred combination was ‘volume & minimap’ with five preferences. The sorting task was transferred to a scale that ranged from 0 (last place) to 4 points (first place). The results are illustrated in Figure 8. The box-whisker plot shows that the interquartile ranges of the volume and view in view techniques begin directly at the highest score limit.

Table 3

The table shows the individually preferred view combinations and how many participants preferred each of them.

View visualization combinations Quantity
View in view & volume 10 participants
Volume & minimap 5 participants
Minimap & monochrome & volume 2 participants
View in view 2 participants
Volume 1 participant
View in view & minimap & volume 1 participant

Figure 8: The graphic shows the individually preferred view visualizations.

5.3 Discussion of the Results

All Friedman tests showed significant differences. Based on these results, we reject all null hypotheses H1₀–H4₀ and accept the corresponding alternative hypotheses H1ₐ–H4ₐ. The techniques differ regarding the proposed quality criteria A1–A4.

Further results of our user study show how the view visualization techniques differ. The three techniques view in view, volume, and minimap were more suitable for observing what learners see within the asymmetric immersive VR setup than blueprint or monochrome. For the aspects A1–A3, the first-mentioned techniques performed significantly better than their alternatives. Regarding the visual disturbance of the view visualizations (A4), the volume visualization was perceived as significantly less disturbing by our participants than the other four. The AttrakDiff analysis supports this claim, as it shows a considerably high product character (A5) score for volume, especially regarding pragmatic quality.

The individual preference rating in Figure 8 suggests that our participants liked view in view and volume most. Furthermore, this trend is confirmed by the combination preferences (Table 3). Here we see that these two were not only chosen over the remaining three visualizations, but were also clearly preferred to be used together.

The study generally shows that future applications similar to our use case should choose between the volume and the view in view visualization, or even use these two techniques combined. Both the individual preferences and the questionnaire items support this claim. Using a combination of multiple view visualizations is additionally motivated by the study results, since only three of our twenty-one participants chose to use just one visualization, while 15 preferred a combination of two and three preferred a combination of three techniques.

6 Conclusions and Future Work

In this paper, we explored the challenges of the shared view paradigm within asymmetric VR setups. We investigated five techniques that try to meet these challenges. Three were categorized as window-based solutions, whereas two of them were in-scene techniques that integrate the view visualization within the educator’s 3D view of the scene. Furthermore, we proposed three techniques that support educators in understanding the state of exploration of a virtual environment at runtime. The implementations of the different visualizations were described with the help of an existing Unity scene that was extended by these visualizations. All techniques could be realized using Unity as a common authoring software for VR applications, and we illustrated how we complemented the existing project.

A user study shows that the volume and the view in view techniques were the most suitable view visualizations for the participants of our study. Furthermore, we could show that the majority of our participants tended to prefer a combination of multiple view visualizations, specifically a combination of two. Moreover, nearly three-quarters of our user group decided to use one window-based and one in-scene view visualization together. We will explore this phenomenon in future research.

We targeted a 1:1 relationship of educators to learners within our study to draw conclusions about the visualizations themselves without bringing in additional aspects that could distract the participants’ attention. The next steps might include multiple users per role, so that multiple educators could observe multiple learners or one educator could observe multiple learners. Educational applications would benefit from this extension of functionality, as many common educational scenarios include multiple learners, classroom settings in particular (e. g., as described for virtual classrooms in [46], [39], [45], [1]).

We used an educational scenario to test our techniques. However, we did not incorporate education-specific aspects within our concepts, so our techniques may even be agnostic to the application domain. In future work, we will utilize our techniques and explore their suitability within different asymmetric and collaborative scenarios, for example, in distributed maintenance and engineering tasks or for supporting experimenters in VR studies.

Future work will also explore the authoring and the automated extension of VR scenes. It was necessary to customize the original Unity scene for some visualizations to work. For example, important objects in the scene had to be marked by authors and new Unity layers had to be created. Extending existing VR applications without the assistance of VR experts could bring benefits for a larger audience, specifically since expert support can be costly. Additional costs can prevent educational institutions from considering VR as a medium. Besides fully self-adjusting extensions, simplified authoring environments could lower this obstacle as well. Interfaces that can be utilized by educators without programming experience should be considered in future work.

Award Identifier / Grant number: 03IHS071

Funding statement: The work is supported by the Federal Ministry of Education and Research of Germany in the project Innovative Hochschule (funding number: 03IHS071).

About the authors

Robin Horst

Robin Horst is a research associate at the Dept. Design, Computer Science, Media of the RheinMain University of Applied Sciences in Wiesbaden, Germany. He received his Master’s degree in Human-Centered Computing with specialization in Medical Informatics from the Reutlingen University in Reutlingen, Germany. His research interests are in the field of interactive technologies, especially in the development of authoring tools for Virtual Reality and Computer Games in the educational sector.

Fabio Klonowski

Fabio Klonowski is a student at the RheinMain University of Applied Sciences in Wiesbaden, Germany. He received his Bachelor’s degree at the RheinMain University of Applied Sciences in 2020. His study interest is Computer Graphics with a focus on user-oriented software development and concepts.

Linda Rau

Linda Rau is a research associate at the Dept. Design, Computer Science, Media of the RheinMain University of Applied Sciences in Wiesbaden, Germany. She obtained her Master’s degree at the Mainz University of Applied Sciences in 2019. Her research is concerned with Augmented Reality experiences and concentrates on their immersion and authoring process.

Ralf Dörner

Dr. Ralf Doerner is a professor for Computer Graphics and Virtual Reality at the Dept. Design, Computer Science, Media of the RheinMain University of Applied Sciences in Wiesbaden, Germany. He is the director of the university’s Visualization Lab. His research interests lie in the field of Mixed Reality technology, esp. novel interaction techniques and authoring methodologies, e. g. for visualization or Serious Games.

Published Online: 2020-08-06
Published in Print: 2020-08-26

© 2020 Walter de Gruyter GmbH, Berlin/Boston
