Abstract
The exploration of complex environments (reconstructed locations, immersive data visualisation, etc.) is one of the primary applications of virtual reality (VR) because of the feeling of immersion and the natural interactions that it provides. When exploration is completely free, users easily become disoriented and frustrated due to multiple factors such as task difficulty, interaction techniques, spatial understanding, immersion breaches, etc. Adaptive VR systems aim to overcome these difficulties and increase performance by providing clues to help the user and delivering effective feedback. Current adaptive VR systems are mainly task-based, meaning that they have explicit knowledge about what users must achieve. We assess a new task-blind approach, which aims to enhance users’ attention and recall processes rather than inferring in real time the task a user is performing and helping directly with their assignment. We compare three VR help-system configurations (task-based, task-blind, and no-help) through a controlled user study involving 66 participants. The results show that the group assisted by our task-blind system developed a different behavioural pattern while maintaining similar performance scores in comparison to the task-based approach. This paper shows a new way to offer more flexibility in help-system design and opens up the field of task-blind adaptive VR.
1 Introduction
Fig. 1 Adaptive VR: Adaptive Interactive Mechanisms (AIM) are used to adapt the VR simulation in real time based on 3 types of input. Figure inspired by the definition in Baker and Fairclough (2022)
Real-time adaptation in virtual reality (VR) is crucial for crafting personalised user experiences and delivering effective feedback. It increases user performance and avoids confusion about the environment and the relevant actions to be carried out inside it. For example, Grant and Magee (1998) investigated how proprioceptive feedback impacts exploration of large virtual environments, and recent works (Venkatakrishnan et al. 2024) continue to study factors contributing to negative experiences during exploration. The question of how systems adapt has garnered significant attention across various disciplines, sparking a surge in research on adaptive VR (Zahabi and Abdul Razak 2020; Zuki et al. 2022). Baker and Fairclough (2022) organised adaptive VR into a closed-loop experience with adaptive interactive mechanisms (AIM). This comprises three main steps: data manipulation, user modelling, and the use of AIM to influence user behaviour or experiences. These steps involve three types of inputs: contextual knowledge (i.e. information about the virtual environment, objects and all the related abstract information contained within it—Bowman et al. (2003)), human knowledge (i.e. information about the user, such as physiological data and user preferences), and task knowledge, wherein the latter refers to the system’s understanding of the user’s current assignment. Figure 1 illustrates the closed-loop experience and the associated inputs. At the top of the figure, contextual knowledge and task knowledge are introduced. Subsequently, based on this knowledge and insights extracted from the user’s behaviour (right side of the figure), the system dynamically adapts and gives feedback.
Fischer (2001) describes help systems as systems that highlight functionalities useful for the user’s tasks and that help them avoid getting stuck. To integrate this definition into an adaptive VR context (Baker and Fairclough 2022) and reduce the focus on user tasks, we have redefined the term help system. In this work, the term help system refers to a system implementing AIM to generate feedback in order to support a user goal, assignment or action.
In the realm of help systems, task-based approaches, i.e. help systems that rely on and need task-knowledge input to provide adaptive assistance, are widely used (e.g. Zahabi and Abdul Razak 2020; Hoover and Winer 2021; Fricoteaux et al. 2014). However, in certain fields such as data exploration, where predefined user assignments are not readily available, task-based approaches face limitations. This challenge is particularly pronounced in immersive analytics, a field dedicated to leveraging VR interaction and features for manual exploration and analysis of datasets (Ens et al. 2021; Skarbez et al. 2019). To compensate for the lack of task knowledge, designers often rely on expert knowledge (Hoover and Winer 2021) or physiological data obtained from a complex system of sensors (Bian et al. 2016; Luong et al. 2022) to build their adaptive system. These approaches involve inferring the assignment a user wants to perform, based on expert advice or by using the results from sensors to choose from multiple possible tasks, and then using this inferred task as task knowledge in AIM construction. Even if they overcome some of the problems of task-based approaches, help systems built this way are still limited to a pre-defined list of tasks they learned to recognise in real time.
Task-blind help systems, i.e. without any task knowledge, could overcome this limitation. For example, in an immersive analytics context, such a system could assist users by highlighting important elements within the environment that they may have overlooked during exploration or, conversely, by highlighting elements similar to those frequently observed by a user. More generally, task-blind help systems could provide valuable insights and overcome the key challenges of the field posed by the absence of upfront task knowledge in task-based approaches. A fundamental question thus arises: Can an adaptive VR system be designed to offer assistance by using human knowledge and without any knowledge of user assignments? In this paper, we aim to assist the VR application designer when faced with scenarios in which they are unaware of the assignments to be performed by users.
We propose a solution that analyses cognitive processes, using simple sensors to understand human behaviour instead of inferring tasks. During decision-making, our brains continuously trigger cognitive processes, i.e. mental actions to process knowledge or sensory information, which lead, for example, to thinking, memory or judgments (Pohl 2022). Without any knowledge (either real-time or preprogrammed) of users’ assignments, our AIM is only based on human and contextual knowledge (green and dark pink in Fig. 1). We aim to recognise two cognitive processes, attention and recall, that are widely used (see Luong et al. (2022) and Sect. 2.2 for details) to evaluate cognitive activity. The decision to focus on attention and recall in our task-blind help system was driven by several key considerations. Attention and recall are fundamental cognitive processes that are closely linked to effective learning and memory formation (Cowan et al. 2024). Attention involves selectively concentrating on specific information while ignoring other stimuli. By directing users’ attention to relevant elements in the VR environment, we can enhance their exploration strategy and understanding of the scene. Recall, on the other hand, is the cognitive process of retrieving information from memory. It is crucial for tasks that require users to remember and act on specific details encountered during the VR experience. Enhancing recall in a VR environment can help users retain important information, making the experience more meaningful and effective. In the context of our study, focusing on attention and recall allows us to create a task-blind help system that can support users’ cognitive processes without needing explicit knowledge of their tasks. By analysing behaviour, our approach tries to provide assistance by enhancing cognitive activity via sensory feedback (Sadeghian and Hassenzahl 2021; Willett et al. 2022; Kozhevnikov et al. 2018), thus helping users do whatever they are doing.
To answer our main question, we implemented a help system with a task-blind approach that seeks to consider the user’s cognitive activity, and more specifically attention and recall. To compare it to the usual task-based help systems and a control group with no help, we conducted a user study with a small and low-cost physiological setup (head, hand and eye tracking systems only). We selected the use case of crime scene understanding because it is a field in which VR usage has already been studied (Engström 2018) and shown to be useful (Reichherzer et al. 2022). Moreover, ground truth can be easily established in a crime scenario, allowing both help system approaches (with and without task knowledge and the resolution of the crime) to be easily designed, implemented and evaluated. In our experiment, we not only evaluate user performance (see Sect. 4.1 for details on evaluation), but also go a step further by exploring the impact of help system designs on users’ behaviour and cognitive processes.
To summarise, our main contribution is the proposal of a task-blind approach for help system design, which enables users to develop different strategies and behaviours without negatively impacting performance. We conducted a user study to compare this approach to a traditional task-based help system and a control group, focusing on a crime scene understanding use case. Our results provide insights into the effectiveness of this relatively unexplored approach to adaptive VR design.
The remainder of this paper is organised as follows: Sect. 2 reviews related work in the domains of adaptive VR systems and cognitive science applications in VR, positioning our work within these fields. We then detail the system design in Sect. 3, including the implementation of task-based and task-blind help systems and the data collection methods used. The experimental design, including our hypotheses, experimental protocol, participant details, and setup, is outlined in Sect. 4. Section 5 presents the results of our user study, analysing the performance and behavioural data collected. A discussion of our findings in Sect. 6 verifies our hypotheses, addresses limitations, and explores the potential implications of our work. Finally, Sect. 7 concludes the paper by summarising our contributions and suggesting directions for future research.
2 Related work
Our work is at the frontier between adaptive VR systems and cognitive science used in VR. This section presents previous work in each domain and our positioning in these fields.
2.1 Adaptive VR systems
Adaptive VR systems can be used to offer various forms of feedback across a multitude of applications (Zahabi and Abdul Razak 2020; Zuki et al. 2022). In the realm of adaptive VR, help systems aim to combine human and computer intelligence by providing relevant assistance. In the human-computer interaction field, some studies concentrate on real-time adaptive systems, such as human-attention-aware systems in self-driving cars (Gil et al. 2016). Bai et al. (2012) proposed a model and developed a framework for visualisation adaptation that would redesign visual composition based on user actions. However, this relies on explicit knowledge provided by the user about their context, which is not applicable under real-time sense-making conditions.
In VR, adaptive help systems used for training have been thoroughly investigated (Zahabi and Abdul Razak 2020). Fricoteaux et al. (2014) created an environment in which multi-modal feedback was adapted to enhance learning, based on real-time analysis of the virtual environment (smart object information, predictions of physics engine, etc.) and the user (stress level, expertise, etc.). Keeping users in an ideal zone of mental load is increasingly the aim of adaptive training (Zahabi and Abdul Razak 2020). Tsiakas et al. (2015) used a pain estimation model coupled with performance estimation to adapt exercise levels, demonstrating that the user’s physical state can also be considered when adapting the environment to the user’s task performance.
One recent study introduced a task-blind help system for navigation. The authors of Alghofaili et al. (2019) trained a long short-term memory (LSTM) classifier on user gaze sequences to predict a user’s need for navigation. Their system, without knowing the user’s destination, can provide navigation indications (such as turn left or right) based on the user’s visual exploration of the environment. However, navigation assignments are very specific and the system is aware that users have to navigate through the environment, which is a form of task knowledge.
The systems presented here are designed for expert users and, to some extent, are task-based. To the best of our knowledge, there are no other works that try to provide help with a task-blind approach only. Previous works that solely rely on a human-behaviour approach aim to induce specific emotions in users rather than creating a help system (details in Sect. 2.2).
Our positioning is to use adaptive VR techniques to design an effective adaptive help system for virtual environment exploration. We aim to explore task-blind approaches since task-based methods are extensively addressed (particularly in the learning sector) and can be very specific due to the data input they require, i.e. task knowledge.
2.2 Cognitive VR
Studying humans’ affective and cognitive states (ACS) is an important part of VR research. Affective states are mostly derived from personality traits and personal experiences and are related to primordial impulses (Barrett 2017). They affect decision-making and can be influenced, for example, by tales, narrative content, and preferences. Cognition refers to humans’ ability to interpret and make sense of their surroundings (Willett et al. 2022). Examples of factors that can influence cognitive states include the inherent difficulty of a task, the quantity of distractions, and the format used to give instructions (Luong et al. 2022). These two types of states, affective and cognitive, are widely studied to provide more immersive and impactful environments by using physiological sensors to collect real-time data from users (Barathi et al. 2020; Peifer et al. 2014; Bian et al. 2016). Virtual environments can be modulated in a more or less transparent way in order to adapt to the user’s affective and cognitive state (Zahabi and Abdul Razak 2020). Luong et al. (2022) summarised the current use of affect and cognition in a VR context. The authors showed that studying human behaviour in VR can have multiple purposes: to induce a specific ACS in the user, to recognise the presence of an ACS in the user (in real time or before or after an experience), or to exploit the presence of an ACS in the user (in real time or before or after an experience). As stated by Norman (1993), “the power of the unaided mind is highly overrated. Without external aids, memory, thought and reasoning are all constrained”. Following the work of Card (1999), we think that VR as Information Visualisation is “just about exploiting the dynamic, interactive, inexpensive medium of graphical computers to devise new external aids that enhance cognitive abilities”. We use behavioural methods (i.e. tracking of users’ behaviours) to analyse ACS and offer relevant assistance in order to enhance cognitive abilities such as attention and recall.
In this work, we aim to build a help system that uses human behaviour in the form of real-time recognition and exploitation of ACS for attention and recall enhancement.
Attention is a complex process that is related to perception, memory, and action. Spatial attention is closely connected to eye movements (Hoffman and Subramaniam 1995). Vortmann et al. studied the classification of attentional states in depth (Vortmann and Putze 2021; Vortmann et al. 2021). Attention can be influenced by using Subtle Gaze Direction (SGD), as shown by Bailey et al. (2009) in a desktop setup. The SGD method uses eye tracking along with minor visual modifications to direct user gaze inside a scene. The modifications occur in the periphery of the field of view, but they disappear before the viewer can examine them with high acuity foveal vision. This technique has since been improved using visual saliency (Sridharan and Bailey 2015) and is also used in a VR context (Grogorick et al. 2017; Paul and Ragan 2020).
Rothe et al. (2018) studied the use of SGD for detail recall in 360\(^\circ \) cinematic content, using a head-mounted display equipped with an eye-tracking system. They showed that attention and recall are connected and assessed the effect of SGD on cognitive mechanisms. However, recall was mainly assessed using questionnaires before and after the experiment. Exploration has been shown to increase understanding and recall of a scene (Gagnon et al. 2018). The use of VR in court and police investigation has been shown to be fairly promising (Engström 2018; van Gelder et al. 2014; Reichherzer et al. 2021). In this field, recall and exploration of events is important. In particular, Reichherzer et al. thoroughly studied understanding and recall in crime scene scenarios (Reichherzer et al. 2022; Rothe et al. 2018).
In summary, help systems in VR mainly use a task-based approach. In an exploration context, a task-blind approach based on cognitive activity can facilitate design and be applicable to a greater number of situations since it is less task-dependent.
3 System design
In this section, we describe the implementation of our use case, the exploration of a theft crime scene in VR. As depicted in Fig. 2, the virtual environment is a 3D reconstructed office, graciously provided by Reichherzer et al. (2018). Participants are given partial information about this crime scene and they have to explore it in order to understand what happened. They are told to explore freely, get to know the room as much as possible, and determine where all 8 stolen objects were located. The participants are randomly distributed into one of 3 groups: they either receive no help or are helped by one of our two help systems. During this VR experience, we use head, hand and eye tracking. We maintain a simple and non-invasive setup to avoid uncontrolled factors and usage issues (e.g. the use of electrodermal activity addressed in Babaei et al. (2021)). Vision is the only sensory input we considered in this experiment, thus in the rest of this paper, attention refers to visual attention. This section describes more precisely how the help systems we built (respectively using task-based and task-blind approaches) interpret and use the data collected from the user.
3.1 Data collection
The system periodically gathers information about participants’ movement and behaviour in a history log used as raw data for the algorithms. In order to implement a help system that gives visual feedback at the right moment, we need to transform this raw user data into action classes that the system can understand. Raw user data is thus parsed periodically to see if a new action has been performed (see Sect. 4.5 for the chosen frequencies). We created 4 possible actions: (1) navigation actions are triggered when participants perform significant movement without aiming at a specific zone in the room. (2) navigation_to actions are triggered when participants go to a specific zone in the room, which is divided into 6 zones (see Sect. 4.2); this action stores the zone to which the user moved. (3) look actions are triggered when participants look at a crime marker; this action stores the target the user looked at. (4) choice_confirmation actions are triggered every time participants complete an interaction with a crime marker; this action stores the user’s input.
These four actions were chosen because they correspond to the classical key user interactions in any virtual environment, i.e. Navigation, Selection and Manipulation (Bowman and Hodges 1999; Bowman et al. 2004).
Actions are logged and stored in another history log. Both logs are accessible and used by other algorithms.
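As an illustration of this parsing step, the following sketch shows one way a window of raw samples could be turned into the four action classes. The movement threshold, field names and zone handling are illustrative assumptions, not the values used in our implementation.

```python
from dataclasses import dataclass

MOVE_THRESHOLD = 0.5  # metres of head displacement treated as "significant movement" (assumed value)

@dataclass
class RawSample:
    head_pos: tuple               # (x, y, z) head position
    gaze_target: str | None       # crime marker currently looked at, if any
    zone: str | None              # room zone the user currently occupies (6 zones)
    confirmed_choice: str | None  # object chosen in a marker interaction, if any

@dataclass
class Action:
    kind: str                     # "navigation", "navigation_to", "look", "choice_confirmation"
    payload: str | None = None

def parse_window(samples: list[RawSample]) -> list[Action]:
    """Turn the latest window of raw samples (parsed every 2 s) into high-level actions."""
    actions: list[Action] = []
    first, last = samples[0], samples[-1]

    # Navigation: significant head displacement, targeted (navigation_to) or not.
    moved = sum((a - b) ** 2 for a, b in zip(first.head_pos, last.head_pos)) ** 0.5
    if moved > MOVE_THRESHOLD:
        if last.zone is not None and last.zone != first.zone:
            actions.append(Action("navigation_to", last.zone))
        else:
            actions.append(Action("navigation"))

    # Look: any crime marker fixated during the window.
    for marker in {s.gaze_target for s in samples if s.gaze_target}:
        actions.append(Action("look", marker))

    # Choice confirmation: completed interactions with a marker.
    for choice in (s.confirmed_choice for s in samples if s.confirmed_choice):
        actions.append(Action("choice_confirmation", choice))
    return actions
```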
Fig. 2 The reconstructed office explored in VR (same model as in Reichherzer et al. (2018)). It shows the room where a theft occurred. 12 orange crime scene markers have been placed in the scene; 8 show the correct location of stolen objects and 4 are decoys
3.2 Help systems
The helpers displayed for the task-based and task-blind approaches are the same. They are described in Fig. 3. All helpers provided are aimed at enhancing the attention and recall cognitive abilities of the user. Their graphical design corresponds to common practices in video game and VR use cases (Dillman et al. 2018). Only one helper is displayed at a time. Helpers requested by the system are queued with a last-in first-out policy. There are 4 kinds of helpers: (1) A highlight helper (see Fig. 3a) is used to bring attention to a specific crime marker, i.e. a potential object location. (2) Multiple highlight helpers are used to bring attention to a specific zone of the room (for example the markers on the round table in Fig. 3a). (3) A link helper (see Fig. 3b) is used to provide useful information about linked objects. Participants should infer that all markers linked by the orange animated lines correspond to objects having a relationship. (4) A gaze_direction helper (see Fig. 3c) is a flickering light used in peripheral vision in order to induce a change of attention and a rotation of the participant (Bailey et al. 2009; Grogorick et al. 2017).
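To make the queueing policy concrete, the sketch below shows one possible representation of helper requests together with the single-display, last-in first-out behaviour; the class and field names are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class HelperRequest:
    kind: str     # "highlight", "multi_highlight", "link", "gaze_direction"
    targets: list # crime marker(s), or a direction ("left"/"right") for gaze_direction

class HelperDisplay:
    """Only one helper is shown at a time; pending requests are served last-in first-out."""

    def __init__(self):
        self._pending: list[HelperRequest] = []   # used as a LIFO stack
        self.current: HelperRequest | None = None

    def request(self, helper: HelperRequest) -> None:
        self._pending.append(helper)

    def step(self) -> None:
        """Called once per frame: show the most recently requested helper when free."""
        if self.current is None and self._pending:
            self.current = self._pending.pop()    # last-in first-out

    def finish_current(self) -> None:
        self.current = None
```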
Fig. 3 Types of visual helpers appearing in the scene: (a) A marker can highlight itself. (b) A link appears, connecting objects belonging to the same owner. (c) A light that redirects attention to the left or right of the user; it appears in peripheral vision, flickering at 10 Hz
The decision to display a helper or not is structured using a perception, decision, action loop (Zahabi and Abdul Razak 2020), commonly used by adaptive systems in VR. The differences between the two active help systems lie only in the calculation of when, where and why a helper is displayed. Inspired by a smart object approach, both help systems use information from each object of the scene. For example, the data collected includes object position, whether the object is currently in the field of view or currently enhanced by a helper, whether the object shares some elements with other objects, etc.
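The shared structure of this loop can be sketched as follows; only the decision step differs between the two help systems, and the interfaces shown here (smart-object container, display component) are assumptions for illustration.

```python
class HelpSystem:
    """Skeleton of the perception-decision-action loop shared by both help systems."""

    def __init__(self, scene_objects, display):
        self.scene_objects = scene_objects  # smart-object info: position, in-view flag, shared elements, ...
        self.display = display              # any component exposing request(helper), e.g. the queue sketched above

    def perceive(self, action_history, raw_history):
        """Gather the latest user actions, raw samples and smart-object state."""
        return {"actions": action_history, "raw": raw_history, "objects": self.scene_objects}

    def decide(self, perception):
        """Return a helper request (or None); this is the only part that differs per system."""
        raise NotImplementedError

    def act(self, helper):
        if helper is not None:
            self.display.request(helper)

    def update(self, action_history, raw_history):
        self.act(self.decide(self.perceive(action_history, raw_history)))
```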
We therefore consider 3 variations of our system:
- no_help: There is no help system.
- behaviour: Our help system has access to context and behaviour knowledge, i.e. the dark pink and green data in Fig. 1.
- task: Our help system has access to context and task knowledge, i.e. the dark pink and light pink data in Fig. 1. In order to understand a user’s journey through the tasks, this variation has access to the user’s action history but not to the behaviour analysis modules.
We describe both help system algorithms in more detail below. They implement a naive approach on purpose. This choice was made to focus on the effect of the design approaches rather than on algorithm complexity.
3.2.1 Behaviour help system
This help system is built using a task-blind approach. Therefore, the help system does not know where the stolen objects are located and provides help only on the basis of participant behaviour. Human actions are stored by the help system and used to recognise human states and behaviours. Eye movement and behavioural data are interpreted in real time (head and hand positions, body exploration, interaction with objects, etc). By carrying out a real-time analysis of human actions, the system tries to guess what the participant is doing during their exploration. The algorithm is quite simple and focuses on how the participant explores the room (Gagnon et al. 2018). Contrary to our task system, no helper is displayed during the first minute of exploration to reduce cognitive load. The user’s action history and raw history (details in Sects. 3.1 and 4.5) are parsed periodically and an intention is deduced by the system from this list of actions. At present, this help system can deduce 6 specific behaviours, i.e. if the user is focusing on a specific marker, is lost, is ignoring a part of the room, is ignoring an object, is physically static, or is visually static. For example, with successive multiple look actions to a marker, the help system considers that the participant is focusing on the marker and will display a link helper on it. The help system displays the helpers visible in Fig. 3 in order to help the participant in all these cases. In contrast to our task system (and because there is no prior knowledge about tasks, user goal and the solution of the assignment), helpers are provided even if the user shows interest in useless objects.
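As a minimal sketch of one of these rules, repeated look actions on the same marker can be read as focusing on it and answered with a link helper; the window size and threshold below are assumptions, not the values used in the study.

```python
FOCUS_LOOKS = 3   # successive looks at the same marker counted as "focusing" (assumed)
WINDOW = 10       # number of recent actions inspected at each parsing period (assumed)

def detect_focus(action_history: list[tuple[str, str]]) -> tuple[str, str] | None:
    """Actions are (kind, payload) pairs; returns a ("link", marker) helper request or None."""
    recent = [payload for kind, payload in action_history[-WINDOW:] if kind == "look"]
    if len(recent) >= FOCUS_LOOKS and len(set(recent[-FOCUS_LOOKS:])) == 1:
        return ("link", recent[-1])   # same marker looked at repeatedly -> link helper on it
    return None

# e.g. detect_focus([("look", "m_desk")] * 3) -> ("link", "m_desk")
```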
3.2.2 Task help system
This help system is built using a task-based approach. For our use case, several underlying tasks have been created to be used as an implicit task-knowledge block by the system. These underlying tasks are not communicated to users; they are just a translation for the system of the general assignment given to users. Tasks such as “Mark stolen statue on the bookcase marker” are given to the system and are validated with a list of actions users have to perform. In this example, the task consists of 3 actions: first look at the bookcase marker, then navigation_to the entrance zone, and finally perform a choice_confirmation on the statue. These tasks are aimed at increasing knowledge about the lost objects’ location. The system displays the helpers visible in Fig. 3 to help the participant finish the task closest to completion. The tasks created relate only to the 8 markers associated with stolen objects. Here, the system knows where the stolen objects were and focuses only on this to provide help. Because the user may not perform assignments in a precise order, the system has to understand the user’s intentions and keep track of which step of which task the user is carrying out. All other behaviours are not considered and help is provided based on an estimation of the user’s progress through the tasks. Every time an action is detected (see Sect. 3.1), the system searches to see if this action is on the list of actions for one of the tasks. If so, completion increases. Then, a new helper is provided for the next action of the most-completed task. This system does not assist in the completion of tasks that were not defined beforehand in the task-knowledge block. Helpers will not trigger on objects that are useless for the resolution of the crime scene.
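A minimal sketch of this bookkeeping, using the statue task from the example above, could look as follows; the data structures, identifiers and the exact matching rule are assumptions made for illustration.

```python
tasks = {
    "mark_stolen_statue": [              # example task from the text
        ("look", "bookcase_marker"),
        ("navigation_to", "entrance_zone"),
        ("choice_confirmation", "statue"),
    ],
    # ... one task per stolen object (8 in total)
}

progress = {name: 0 for name in tasks}   # index of the next expected action per task

def on_action(action):
    """Update task completion and return the next expected action of the most-completed task."""
    for name, steps in tasks.items():
        if progress[name] < len(steps) and steps[progress[name]] == action:
            progress[name] += 1          # the detected action completes one step of this task
    unfinished = [n for n in tasks if progress[n] < len(tasks[n])]
    if not unfinished:
        return None
    best = max(unfinished, key=lambda n: progress[n] / len(tasks[n]))
    return tasks[best][progress[best]]   # the action a helper should now support

# e.g. on_action(("look", "bookcase_marker")) -> ("navigation_to", "entrance_zone")
```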
4 Experiment design
The primary goal of this user study is to compare task-based and task-blind approaches from several points of view. As previously explained, we selected the use case of crime scene exploration. We used an experimental protocol similar to Reichherzer et al. (2018) in order to study participants’ narrative recall and spatial understanding of a scene in VR. The visual environment is the same, but the technique tested is completely different, as explained in Sect. 3. This section describes the experiment design in depth.
4.1 Hypotheses
A fictional theft occurred in an office. The positions of the stolen objects and understanding the scenario are defined as the main points of interest guiding the exploration. They are used to define the underlying tasks for the task-based approach, described in our task help system. However, spatial understanding of the room and attention to details are also evaluated, even if they are not directly linked to the main points of interest. Additionally, user experience, which encompasses key aspects such as a feeling of presence, immersion, and emotion, is assessed. In this exploratory work, all points of interest may be considered as major in retrospect. Therefore, the evaluation encompasses four main categories: narrative recall, spatial understanding, user experience, and attention to details of the scene. These categories are used to assess performance. In this work, performance is defined by the user’s performance across all 21 variables (see Sect. 4.4 for variables details). Categories also allow an evaluation of behavioural patterns in groups (for example by evaluating users’ interest in secondary elements of the scene). This is why our hypotheses address both behavioural and performance questions. The central question revolves around determining which help system approach provides the best trade-off between these four categories.
Our hypotheses for this study can be described as follows:
- H.1: The user experience is better for the no_help group than for the helped groups. Here we suppose that help systems may be seen as a disturbance given the short time available to explore the environment.
- H.2-a: The behaviour system promotes behaviours that lead to better recall processes. Here we suppose that behaviour is a valid help system for enhancing recall.
- H.2-b: The behaviour system promotes behaviours that lead to better spatial understanding processes. Here we suppose that behaviour is a valid help system for enhancement of spatial understanding.
- H.2-c: The behaviour system promotes behaviours that lead to better attention to details. Here we suppose that behaviour is a valid help system for enhancing attention.
- H.3: The task system promotes behaviours that lead to better performance on implicit-task completion. Here we suppose that the use of the task system allows better performance on the main points of interest.
4.2 Experimental protocol
The experiment is performed in six phases: (1) Preparation, (2) Immediate Narrative Recall, (3) Exploration of the VR Environment, (4) Virtual Experience Questionnaire (VEQ), (5) Delayed Recall, and (6) Attention and Spatial Questionnaires. Everything takes place in the same room, and snacks are available at any time.
(1) Preparation: Participants are given a scenario describing a discussion between a police officer and someone filing a complaint for a theft in their office. This theft led to the loss of 8 objects belonging to the complainant and their colleague. To increase the difficulty, the questions and remarks of the police officer have been removed and participants are only presented with the complainant’s answers. Participants are invited to read the narrative as many times as necessary and understand as much of the partial information presented as possible.
(2) Immediate Narrative Recall: When participants feel prepared for the experiment and have a good understanding of the story, they are asked to explain the narrative from memory as part of a free recall exercise (FR). In case essential details of the narrative are missed, Cued Recall (CR) questions such as “How many people work in this office?” are asked. To assess recall performance, participants’ ability to recount the narrative is scored using the following criteria: 2 points are awarded if they recall a point without a cue, 1 point if a recall question is required to prompt their memory, and 0 if they cannot recall even with the question. There were 21 key points to remember.
(3) Exploration: Participants are familiarised with VR by using a simple training room, allowing them to practice walking in VR and interacting with objects. Once ready, participants are given 4 min to explore the 3D reconstructed office. This duration was chosen based on previous work: long enough to provide a good comprehension of the scene, but not enough to memorise everything (Reichherzer et al. 2018). They are instructed to explore freely, get to know the room as much as possible, identify the locations of all 8 stolen objects, and nothing more. These instructions remain consistent across all participant groups. Interactive crime scene markers, including 4 decoys to introduce ambiguity, are placed in the office at the supposed locations of the stolen objects. See Sect. 4.5 and Fig. 2 for more details.
(4) Virtual Experience Questionnaire (VEQ): Participants are subsequently requested to complete the VEQ to gather insights into their subjective feelings regarding usability and user experience. This questionnaire (version 2 was used here) is a unified questionnaire based on nine other existing questionnaires (PQ, ITQ, Flow4D16, CSE, AEQ, SUS, UTAUT, AttracDiff, SSQ) and has been validated to measure various facets of the user experience (Tcha-Tokey et al. 2016).
(5) Delayed Recall: The exercise from phase 2 is repeated. At least 30 min must have passed between the immediate and delayed recall exercises.
(6) Attention and Spatial Questionnaires: The final questions are about details of the scene in order to assess attention. There are 5 questions: “Estimate the dimensions of the room in square meters.”, “How many chairs were there in the room?”, “How many of them were knocked over?”, “How many bookcases were there in the room?” and “How many desks were in the room?”. For the room dimensions question, we did not consider more precise distance and size perception protocols (such as Lin et al. (2024); Napieralski et al. (2011)) because our goal is not to evaluate perception. This question uses the room size as one of the details of the room, thus only evaluating attention. As a final task, participants are presented with a 2D plan of the room and are required to accurately pinpoint the locations of all 8 stolen objects. A spatial understanding score is calculated from their answers: for each given answer, we overlaid the same 2D plan with the correct object locations (as illustrated in Fig. 4) to classify the answer as correct (within the green circle of 15 cm radius), guessed (outside the green circle but in the correct zone), wrong (not in the correct position) or omitted (not positioned at all). These 4 scores are described in Table 1, and a sketch of this classification is shown after the list.
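The classification of each plan answer can be sketched as follows; the zone lookup is an assumed helper function, and only the 15 cm radius comes from the protocol above.

```python
import math

CORRECT_RADIUS = 0.15  # metres: the green circle around the correct location

def classify_answer(answer_xy, correct_xy, zone_of) -> str:
    """answer_xy is None when the object was not positioned on the plan at all."""
    if answer_xy is None:
        return "omitted"
    dist = math.dist(answer_xy, correct_xy)
    if dist <= CORRECT_RADIUS:
        return "correct"                          # within the green circle
    if zone_of(answer_xy) == zone_of(correct_xy):
        return "guessed"                          # right zone, wrong spot
    return "wrong"
```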
4.3 Participants
The between-subjects study involved 66 participants (65.14% male, 33.3% female, and 1.51% preferred not to disclose gender), aged between 20 and 59 (average age: 33.89). Participants, predominantly students or University of Montpellier staff, were volunteers with normal or corrected-to-normal eyesight (40.9% wore glasses or contact lenses). A majority (84.8%) had minimal previous VR experience, including 39.4% reporting no previous experience. None of the participants had known immediate or long-term memory issues.
4.4 Procedure
Participants were randomly assigned to one of three groups: no_help, behaviour, and task. The protocol was exactly the same for each group, and no participants were aware of the implicit tasks the task system used. There were therefore 22 repetitions for each of the three groups. During VR exploration (phase 3), the behaviour and task groups received visual helpers from the environment. no_help was the control group; the participants’ actions were logged but they did not receive any visual help from the system.
Throughout both recall phases (2 and 5), exploration (phase 3), VEQ (phase 4), and the final questionnaires (phase 6), all answers were collected and translated into scores. These scores assess 21 independent variables used for the analysis of 4 categories: narrative recall, spatial understanding, user experience, and attention to details of the scene. These categories are directly related to our hypotheses as described in Sect. 4.1. The variables and the use thereof are summarised in Table 1.
This experiment was approved by the ethics committee of the University of Montpellier, advisory board notice N\(^\circ \)UM2022-010.
4.5 Experimental setup
As stated before, the virtual environment comes from the VR part of Reichherzer et al. (2018). A \(6.0 \times 5.0 \, {\text{m}}^{2}\) office was reconstructed and computed in a 293,811 polygon mesh, as shown in Fig. 2. The headset used was a Pico Neo 3 Pro Eye with a built-in eye tracking system from Tobii at a maximum sampling rate of 90 Hz. Participants could walk freely in a \(6.0 \times 6.0 \, {\text{m}}^{2}\) zone and no other locomotion system was implemented. The only interaction they had in the room was with the crime markers as described in Fig. 5. The interaction was the same across all three groups and allowed users to make a proposition as to where the stolen objects were located. The helpers were passive; no interaction was required. When they were ready, participants started the experiment and were transported to the reconstructed office. During this phase, head position, hand position, gaze direction, gaze target, gaze hit point and whether or not a helper was currently displayed were logged at a frequency of 10 Hz. This raw data from the user was parsed every 2 s to see if a new action had been performed. For the behaviour group, action history was parsed every 2 s to detect intention. For the gaze_direction helper, a flickering frequency of 10 Hz was chosen based on SGD literature (Bailey et al. 2009; Rothe et al. 2018). The experiment logic, log system and VR system were implemented in Unity, editor version 2021.2.8f1.
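For reference, the raw record logged at each 10 Hz tick and the timing constants above can be summarised in the following sketch; the field names are illustrative assumptions.

```python
from dataclasses import dataclass

LOG_RATE_HZ = 10       # raw logging frequency during exploration
PARSE_PERIOD_S = 2.0   # raw data parsed into actions every 2 s
FLICKER_HZ = 10        # gaze_direction helper flicker frequency

@dataclass
class LogEntry:
    t: float                      # timestamp (s)
    head_pos: tuple               # (x, y, z)
    hand_pos: tuple               # (x, y, z)
    gaze_dir: tuple               # gaze direction (unit vector)
    gaze_target: str | None       # object hit by the gaze ray, if any
    gaze_hit_point: tuple | None  # world-space gaze hit point
    helper_displayed: bool        # whether a helper is currently shown
```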
5 Results
This section presents the most compelling results obtained from data analysis and pattern observation in data plots. Due to the large number of variables (see Table 1 for details) and the complexity of the results, we provide a brief summary of the findings. Each of the four categories is explored, followed by a discussion of the behaviour observed in each group.
5.1 Analysis methodology
For the statistical analysis, Matlab R2021b was used with the Statistics and Machine Learning Toolbox™ Version 12.2. Normality was assessed using the Shapiro-Wilk test. Differences between groups were assessed in a three-tier setting (using all three groups: task, behaviour, no_help) as well as in two-tier settings, with two groups in each case. When the assumptions of the one-way Analysis of Variance (ANOVA) were met, the ANOVA test was employed; otherwise, the Kruskal-Wallis non-parametric test was used. In the three-tier setting, p-values of significance were adjusted using the Bonferroni correction. The Pearson product-moment correlation coefficient was computed to determine linear correlations between variables. The statistical significance threshold was set to 0.05 for all the tests.
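Although the analysis itself was run in Matlab, an equivalent sketch of this test-selection logic in Python (scipy) is given below; the number of comparisons used for the Bonferroni-adjusted threshold is left as a parameter, since the exact set of corrected comparisons depends on the setting considered.

```python
from scipy import stats

ALPHA = 0.05  # significance threshold used for all tests

def compare(task, behaviour, no_help, n_comparisons=1):
    """Pick ANOVA or Kruskal-Wallis based on normality, with a Bonferroni-adjusted threshold."""
    groups = [task, behaviour, no_help]
    normal = all(stats.shapiro(g).pvalue > ALPHA for g in groups)   # Shapiro-Wilk per group
    result = stats.f_oneway(*groups) if normal else stats.kruskal(*groups)
    significant = result.pvalue < ALPHA / n_comparisons             # Bonferroni correction
    return result.pvalue, significant

def correlation(x, y):
    """Pearson product-moment correlation between two variables."""
    r, p = stats.pearsonr(x, y)
    return r, p
```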
Fig. 7 Results for the Spatial Understanding category. Bar plots of means and standard deviation for each variable. Asterisks show significant differences between groups (see Sect. 5.3). Dark pink is behaviour, light pink is task, and green is no_help
Fig. 8 Results for the User Experience category. Box plots for each variable. Asterisks show significant differences between groups (see Sect. 5.3). Dark pink is behaviour, light pink is task, and green is no_help
Fig. 9 Results for the Attention to details category. Bar plots of means and standard deviation for each variable. Asterisks show significant differences between groups (see Sect. 5.3). Dark pink is behaviour, light pink is task, and green is no_help
5.2 Similarities
Several variables showed no significant differences between groups. These results are quite interesting; we report them below and discuss them in Sect. 6.
5.2.1 Narrative recall
This was assessed using a \(2 \times 3\) mixed-measure ANOVA. The independent factor was the group and the repeated measurement was taken at two time points (instant_recall_score & delayed_recall_score). There were no differences in overall performance between groups (\(F = {0.934}, p = {0.398}\)). Figure 6 gives more insight into distribution. behaviour (in dark pink) seems to have more uniform results in narrative recall, showing less variation in final scores, with closer quartiles than task and no_help. All groups experienced a significant improvement in understanding after being exposed to the VR scene \((F = {2390}, p < 0.001),\) which agrees with the results obtained in Reichherzer et al. (2018).
5.2.2 Spatial understanding
No significant differences were found for the knowledge and guess variables. This again shows no differences in overall spatial understanding performance between groups. We can see that delayed_recall_score is linearly correlated with knowledge \((r = {0.406}, p = {0.000645}),\) which confirms the Narrative Recall results for overall performance.
5.2.3 User eXperience
There were no significant differences for Engagement, Immersion, Flow, Judgement and Experiment Consequences in the VEQ variables.
5.2.4 Attention to details
Each variable of this category has a correct answer described in Table 1, so we represented participants’ answers in Fig. 9 by calculating the normalised distance to that correct answer instead of the raw value. This allows easier reading, a uniform scale, and quick visualisation of overestimates or underestimates. Distance is calculated by normalising the difference between the given answer and the correct answer.
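Assuming the normalisation is taken relative to the correct value (so that the sign distinguishes overestimates from underestimates), the distance takes the form:

\(d = (\text{answer} - \text{correct}) / \text{correct}\)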
There were no significant differences for the room_size, desks, and knocked_chairs variables.
5.3 Differences
In this section we report some key variables that show statistically significant differences between groups.
5.3.1 Spatial understanding
Two significant differences were found between the behaviour and task groups (\(p = {0.0351}\) for omissions in the two-tier model; \(p = {0.0347}\) for inventions in the three-tier model). The behaviour group had the most inventions and the fewest omissions in their responses, whereas the task group had the opposite behaviour, see Fig. 7.
5.3.2 User eXperience
The no_help group was significantly different from the behaviour group for Emotion and Technology Adoption (\(p = {0.0458}\) and \(p = {0.0248}\), respectively, both in the two-tier model). It was also different from task for Presence (\(p = {0.0237}\) in the two-tier model). In each case, no_help performed better, which suggests that the helpers may have disrupted the user experience for both helped groups, see Fig. 8.
5.3.3 Attention to details
The behaviour group was significantly different from both the task and no_help groups for the chairs and bookcases estimations (for the task group, \(p = {0.0458}\) and \(p = {0.0240}\), respectively, in the two-tier model; for the no_help group, respectively, \(p = {0.0127}\) in the three-tier model and \(p = {0.0165}\) in the two-tier model). behaviour performed the worst in the chairs estimation but performed the best in the bookcases estimation, see Fig. 9.
5.4 Behaviour analysis
These differences and similarities between groups for all our variables (see Table 1 for the list of variables) allow us to understand and highlight some strategies specific to each helped group. The results do not allow any conclusion as to a potential specific strategy used by participants in the no_help group.
5.4.1 Exploration strategy
One main difference between the participants’ behaviour in the behaviour group and that of the other groups is the understanding of bookcase and chair placement. behaviour has a significantly better score for the bookcases variable, but a significantly worse score for chairs. This can be explained by the positioning of the chairs in the experiment. As visible in Fig. 4, all the chairs were close to one or multiple correct crime markers. Since the behaviour system is not aware of the correct crime markers, participants in the behaviour group were directed to all the crime markers in the room, even those used as decoys. Therefore, users in this group were the most distracted away from the correct locations by helpers, and thus also the most distracted away from the details around them such as chairs, focusing instead on less-important points of interest such as bookcase locations. These results can be explained by an exploration strategy based on secondary points of interest, which occurred only in the behaviour group.
5.4.2 Reporting strategies
We note two opposite strategies for the answers given in the spatial understanding category: participants in the task group had more omissions in order to reduce inventions, whereas participants in the behaviour group increased inventions and reduced omissions. In terms of the specific exploration strategy, these "errors" from the behaviour group can also be interpreted another way: the new locations they highlighted, despite being errors with regard to the scenario, may be points of interest that would otherwise go unnoticed. Two different and clear strategies come to light here: reporting only what they are sure of (used by participants in the task group), and reporting all points of interest even if they may be errors (used by participants in the behaviour group).
6 Discussion
In this section, we present our conclusions for our task-blind adaptive VR system. To summarise, our study (1) gives users partial information and general assignments (“explore freely”, “know the room” and “determine the position of stolen objects”), (2) is designed to evaluate performance for these implicit tasks, and (3) evaluates user behaviour during the thinking process. To draw conclusions about our independent factor (no_help, task and behaviour help systems), we clearly differentiate performance and behaviour in this discussion.
6.1 Verification of hypotheses
In this section, the results from Sect. 5 are brought together to draw conclusions about the hypotheses defined in Sect. 4.1.
- H.1: The user experience is better for the no_help group than for the helped groups. The overall performance of the no_help group in terms of User eXperience variables compared to the other groups validates H.1. This was expected and can be explained by the immersion-breaking nature of the helpers. For example, some users reported that they did not clearly understand the purpose of some of the helpers due to the short time limit, which reduces immersion in the environment. However, the results do not allow any conclusion as to which help system is better, task or behaviour (see Fig. 8 and Sect. 5.3).
- H.2-a: The behaviour system promotes behaviours that lead to better recall processes. Overall, H.2-a is inconclusive because there are no significant differences between the groups in narrative recall performance after VR exposure. When looking at the distributions and tendencies in Fig. 6, behaviour shows less variation in the final scores, with closer quartiles for post-experiment measurements, which is not conclusive but is still promising.
- H.2-b: The behaviour system promotes behaviours that lead to better spatial understanding processes. The behaviour group’s performance in the spatial understanding category shows specific exploration and reporting strategies (see Sect. 5.4) compared to the other groups. This behaviour does not provide a clear improvement in performance, but the exploration strategy does give a different understanding of spatial elements in the scene, and the reporting strategy highlights that understanding. This behaviour, compared to that of the no_help and task groups, validates H.2-b.
- H.2-c: The behaviour system promotes behaviours that lead to better attention to details. The behaviour group shows a specific exploration strategy (see Sect. 5.4) that outperformed the other groups in the attention to details category, validating H.2-c.
- H.3: The task system promotes behaviours that lead to better performance on implicit-task completion. Overall, H.3 is inconclusive because neither of the 2 helped groups can be designated as better than the other in terms of performance, even when taking into account distributions and tendencies (Figs. 6 and 7). The behaviours are clearly different, but the performance does not follow. We give more details about the similarities in performance in Sect. 6.3. However, the fact that H.3 is inconclusive means that task-blind help systems have a promising future since they have the advantage of changing behaviour without reducing performance (more details in Sect. 6.2).
6.2 Takeaways
Our results provide a first insight into what task-blind systems can do and how we can design and evaluate them. Even though helpers reduced immersion (H.1), our help systems, task and behaviour, significantly changed user behaviour in terms of their exploration and reporting strategies and their understanding of the scene (Sect. 5.4, H.2-b and H.2-c). Moreover, each help approach was evaluated as performing as well as the others (H.2-a and H.3), which shows that our task-blind approach did not reduce performance with respect to the traditional approach. Overall, this is very promising and shows that task-blind adaptive VR can exist and can be implemented. This clearly opens up new possibilities in help system design by putting task knowledge in perspective, particularly in contexts in which task definitions are difficult to establish or limit users’ behaviour. Example use cases include sandbox environments, environments that call on the user’s creativity, expertise or subjective point of view, and any design that aims to give users complete freedom.

However, this study also raises questions about performance as a metric and its evaluation. In our study, performance is not clearly affected by our help systems (similar results in the NR and SU categories), even though they clearly have an impact on behaviour, UX and AD. We highlight this difference between behaviour and performance on purpose because it is important in VR system design to decide how and why to help a user. For example, in some applications, user behaviour can be more important than performance (e.g. in a social context, for immersion and storytelling, in a learning context, etc.), and vice versa. There are also applications in which, given an equal level of user performance, designers will prefer promoting a specific behaviour.

By offering the possibility of influencing behaviour without altering performance (with results similar to Reichherzer et al. (2018), for example), this task-blind approach opens up a promising research field and provides the opportunity for a trade-off in adaptive VR designs. Designers and developers can help users explore what the system already knows or explore the scene differently. Depending on the exploration strategy designers want users to adopt, each approach can be effective and focuses on different aspects of the scene without negatively affecting performance.
6.3 Limitations
Even though our behaviour system induces a behaviour more focused on details of the scene, our main limitation is that the results do not show major differences between groups in terms of performance, despite promising distributions. This raises multiple questions that need to be addressed for the future of task-blind adaptive VR. First, were the helpers helpful? One missing element in our protocol was asking users to evaluate the usefulness of the helpers. Having user feedback on the helpers after the experiment could give more information on performance differences, help system perception and impact. However, we showed that our systems affect behaviours, so we know that the helpers, and therefore our help systems, were not useless. A future experiment must then focus on the following question: how can task-blind adaptive VR lead to improved performance? A first possibility could be to explore helper triggering in more depth. In this study, the only difference between task and behaviour was the timing of helper triggering, with other variables being the same between groups. This gives the opportunity, as future work, to study the correlation between task-blind helper triggering and help system performance (or help system capacity to change behaviour). Similarly, it would be interesting to examine the characteristics of an efficient task-blind approach, and to verify whether individual differences in cognitive abilities affect efficiency. An interesting direction for future work would be to provide a better understanding thereof or a formal model. In this discussion, we have tried to raise multiple research questions that could contribute to advancing the field of task-blind adaptive VR.
7 Conclusion
In this article, we presented a study that introduces a new way of designing help systems in VR, using a task-blind adaptive VR system. We compared the usual task-based approach to this new approach in a user study conducted with 66 participants. Our results suggest that both approaches are valid and induce two significantly different behaviours, respectively helping with main and secondary points of interest. This gives more flexibility in help system design by bringing to light a trade-off between planned performance and planned behaviour. We showed that it is possible to build a task-blind system that induces novel behaviours without negatively impacting performance. This provides multiple opportunities for future work, particularly on (1) the impact of task-blind help systems on users’ performance; (2) the structure of effective task-blind help systems, for example in terms of the nature of the helpers and the triggering characteristics; and (3) the cognitive processes targeted by help systems. Indeed, here we focused on attention and recall, but future research can address the enhancement of other cognitive processes.
References
Alghofaili R, Sawahata Y, Huang H, Wang H-C, Shiratori T, Yu L-F (2019) Lost in style: gaze-driven adaptive aid for VR navigation. In: Proceedings of the 2019 CHI conference on human factors in computing systems. ACM, Glasgow, Scotland, pp 1–12. https://doi.org/10.1145/3290605.3300578
Babaei E, Tag B, Dingler T, Velloso E (2021) A critique of electrodermal activity practices at CHI. In: Proceedings of the 2021 CHI conference on human factors in computing systems. ACM, Yokohama, pp 1–14. https://doi.org/10.1145/3411764.3445370
Bai X, White D, Sundaram D (2012) Contextual adaptive knowledge visualization environments. Electron J Knowl Manag 10(1):1–14
Bailey R, McNamara A, Sudarsanam N, Grimm C (2009) Subtle gaze direction. ACM Trans Graphics 28(4):1–14. https://doi.org/10.1145/1559755.1559757
Baker C, Fairclough SH (2022) Chapter 9 - adaptive virtual reality. In: Fairclough SH, Zander TO (eds) Current research in neuroadaptive technology. Academic Press, Cambridge, pp 159–176. https://doi.org/10.1016/B978-0-12-821413-8.00014-2
Barathi SC, Proulx M, O’Neill E, Lutteroth C (2020) Affect recognition using psychophysiological correlates in high intensity VR exergaming. In: Proceedings of the 2020 CHI conference on human factors in computing systems. ACM, Honolulu, pp 1–15. https://doi.org/10.1145/3313831.3376596
Barrett LF (2017) How emotions are made: the secret life of the brain. Pan macmillan edn. Houghton Mifflin Harcourt, Boston, New York
Bian Y, Yang C, Gao F, Li H, Zhou S, Li H, Sun X, Meng X (2016) A framework for physiological indicators of flow in VR games: construction and preliminary evaluation. Pers Ubiquit Comput 20(5):821–832. https://doi.org/10.1007/s00779-016-0953-5
Bowman DA, North C, Chen J, Polys NF, Pyla PS, Yilmaz U (2003) Information-rich virtual environments: theory, tools, and research agenda. In: Proceedings of the ACM symposium on virtual reality software and technology. VRST ’03. Association for Computing Machinery, New York, pp. 81–90. https://doi.org/10.1145/1008653.1008669
Bowman DA, Hodges LF (1999) Formalizing the design, evaluation, and application of interaction techniques for immersive virtual environments. J Visual Languages Comput 10(1):37–53. https://doi.org/10.1006/jvlc.1998.0111
Bowman D, Kruijff E, LaViola JJ Jr, Poupyrev IP (2004) 3D user interfaces: theory and practice. Addison-Wesley, Boston
Card S, Mackinlay J, Shneiderman B (1999) Readings in information visualization: using vision to think. Morgan Kaufmann, San Francisco
Cowan N, Bao C, Bishop-Chrzanowski BM, Costa AN, Greene NR, Guitard D, Li C, Musich ML, Ünal ZE (2024) The relation between attention and memory. Ann Rev Psychol 75:183–214. https://doi.org/10.1146/annurev-psych-040723-012736
Dillman KR, Mok TTH, Tang A, Oehlberg L, Mitchell A (2018) A visual interaction cue framework from video game environments for augmented reality. In: Proceedings of the 2018 CHI conference on human factors in computing systems. ACM, Montreal QC, pp 1–12. https://doi.org/10.1145/3173574.3173714
Engström P (2018) Virtual reality for crime scene visualization. In: Three-dimensional imaging, visualization, and display 2018, vol 10666. SPIE, Orlando, FL, pp 102–108. https://doi.org/10.1117/12.2304653
Ens B, Bach B, Cordeil M, Engelke U, Serrano M, Willett W, Prouzeau A, Anthes C, Büschel W, Dunne C, Dwyer T, Grubert J, Haga JH, Kirshenbaum N, Kobayashi D, Lin T, Olaosebikan M, Pointecker F, Saffo D, Saquib N, Schmalstieg D, Szafir DA, Whitlock M, Yang Y (2021) Grand challenges in immersive analytics. In: Proceedings of the 2021 CHI conference on human factors in computing systems. ACM, Yokohama Japan, pp. 1—17. https://doi.org/10.1145/3411764.3446866
Fischer G (2001) User modeling in human-computer interaction. User Model User-Adap Inter 11(1):65–86. https://doi.org/10.1023/A:1011145532042
Fricoteaux L, Thouvenin I, Mestre D (2014) GULLIVER: a decision-making system based on user observation for an adaptive training in informed virtual environments. Eng Appl Artif Intell 33:47–57. https://doi.org/10.1016/j.engappai.2014.03.005
Gagnon KT, Thomas BJ, Munion A, Creem-Regehr SH, Cashdan EA, Stefanucci JK (2018) Not all those who wander are lost: spatial exploration patterns and their relationship to gender and spatial memory. Cognition 180:108–117. https://doi.org/10.1016/j.cognition.2018.06.020
Gil M, Pelechano V, Fons J, Albert M (2016) Designing the human in the loop of self-adaptive systems. In: García CR, Caballero-Gil P, Burmester M, Quesada-Arencibia A (eds) Ubiquitous computing and ambient intelligence. Lecture notes in computer science. Springer, Cham, pp 437–449. https://doi.org/10.1007/978-3-319-48746-5_45
Grant SC, Magee LE (1998) Contributions of proprioception to navigation in virtual environments. Hum Factors 40(3):489–497. https://doi.org/10.1518/001872098779591296
Grogorick S, Stengel M, Eisemann E, Magnor M (2017) Subtle gaze guidance for immersive environments. In: Proceedings of the ACM symposium on applied perception. ACM, Cottbus, pp 1–7. https://doi.org/10.1145/3119881.3119890
Hoffman JE, Subramaniam B (1995) The role of visual attention in saccadic eye movements. Percept Psychophys 57(6):787–795. https://doi.org/10.3758/BF03206794
Hoover M, Winer E (2021) Designing adaptive extended reality training systems based on expert instructor behaviors. IEEE Access 9:138160–138173. https://doi.org/10.1109/ACCESS.2021.3118105
Kozhevnikov M, Li Y, Wong S, Obana T, Amihai I (2018) Do enhanced states exist? Boosting cognitive capacities through an action video-game. Cognition 173:93–105. https://doi.org/10.1016/j.cognition.2018.01.006
Lin W-Y, Venkatakrishnan R, Venkatakrishnan R, Babu SV, Pagano C, Lin W-C (2024) An empirical evaluation of the calibration of auditory distance perception under different levels of virtual environment visibilities. In: 2024 IEEE conference on virtual reality and 3D user interfaces (VR). pp 690–700. https://doi.org/10.1109/VR58804.2024.00089
Luong T, Lecuyer A, Martin N, Argelaguet F (2022) A survey on affective and cognitive VR. IEEE Trans Visual Comput Graphics 28(12):5154–5171. https://doi.org/10.1109/TVCG.2021.3110459
Napieralski PE, Altenhoff BM, Bertrand JW, Long LO, Babu SV, Pagano CC, Kern J, Davis TA (2011) Near-field distance perception in real and virtual environments using both verbal and action responses. ACM Trans Appl Percept 8(3):18:1–18:19. https://doi.org/10.1145/2010325.2010328
Norman DA (1993) Things that make us smart: defending human attributes in the age of the machine. Addison-Wesley Longman Publishing Co., Inc, USA
Paul DJ, Ragan ED (2020) Subtle gaze direction with asymmetric field-of-view modulation in headworn virtual reality. In: 2020 IEEE conference on virtual reality and 3D user interfaces abstracts and workshops (VRW). pp 569–570. https://doi.org/10.1109/VRW50115.2020.00136
Peifer C, Schulz A, Schächinger H, Baumann N, Antoni CH (2014) The relation of flow-experience and physiological arousal under stress – Can u shape it? J Exp Soc Psychol 53:62–69. https://doi.org/10.1016/j.jesp.2014.01.009
Pohl RF (ed) (2022) Cognitive illusions: intriguing phenomena in thinking, judgment, and memory, 3rd edn. Routledge, London. https://doi.org/10.4324/9781003154730
Reichherzer C, Cunningham A, Walsh J, Kohler M, Billinghurst M, Thomas BH (2018) Narrative and spatial memory for jury viewings in a reconstructed virtual environment. IEEE Trans Visual Comput Graphics 24(11):2917–2926. https://doi.org/10.1109/TVCG.2018.2868569
Reichherzer C, Cunningham A, Barr J, Coleman T, McManus K, Sheppard D, Coussens S, Kohler M, Billinghurst M, Thomas BH (2022) Supporting jury understanding of expert evidence in a virtual environment. In: 2022 IEEE conference on virtual reality and 3D user interfaces (VR). IEEE, Christchurch, pp 615–624. https://doi.org/10.1109/VR51125.2022.00082
Reichherzer C, Cunningham A, Coleman T, Cao R, McManus K, Sheppard D, Kohler M, Billinghurst M, Thomas BH (2021) Bringing the jury to the scene of the crime: memory and decision-making in a simulated crime scene. In: Proceedings of the 2021 CHI conference on human factors in computing systems. ACM, Yokohama, pp 1–12. https://doi.org/10.1145/3411764.3445464
Rothe S, Althammer F, Khamis M (2018) GazeRecall: Using gaze direction to increase recall of details in cinematic virtual reality. In: Proceedings of the 17th international conference on mobile and ubiquitous multimedia. MUM 2018. ACM, New York, pp 115–119. https://doi.org/10.1145/3282894.3282903
Sadeghian S, Hassenzahl M (2021) From limitations to “Superpowers”: a design approach to better focus on the possibilities of virtual reality to augment human capabilities. In: Designing interactive systems conference 2021. ACM, Virtual Event, pp 180–189. https://doi.org/10.1145/3461778.3462111
Skarbez R, Polys NF, Ogle JT, North C, Bowman DA (2019) Immersive analytics: theory and research agenda. Front Robot AI 6:82. https://doi.org/10.3389/frobt.2019.00082
Sridharan S, Bailey R (2015) Automatic target prediction and subtle gaze guidance for improved spatial information recall. In: Proceedings of the ACM SIGGRAPH symposium on applied perception. ACM, Tübingen, pp 99–106. https://doi.org/10.1145/2804408.2804415
Tcha-Tokey K, Christmann O, Loup-Escande E, Richir S (2016) Proposition and validation of a questionnaire to measure the user experience in immersive virtual environments. Int J Virtual Reality 16(1):33–48. https://doi.org/10.20870/IJVR.2016.16.1.2880
Tsiakas K, Huber M, Makedon F (2015) A multimodal adaptive session manager for physical rehabilitation exercising. In: Proceedings of the 8th ACM international conference on pervasive technologies related to assistive environments. ACM, Corfu, pp 1–8. https://doi.org/10.1145/2769493.2769507
van Gelder J-L, Otte M, Luciano EC (2014) Using virtual reality in criminological research. Crime Sci 3(1):10. https://doi.org/10.1186/s40163-014-0010-5
Venkatakrishnan R, Venkatakrishnan R, Raveendranath B, Canales R, Sarno DM, Robb AC, Lin W-C, Babu SV (2024) The effects of secondary task demands on cybersickness in active exploration virtual reality experiences. IEEE Trans Visual Comput Graphics 30(5):2745–2755. https://doi.org/10.1109/TVCG.2024.3372080
Vortmann L-M, Putze F (2021) Combining implicit and explicit feature extraction for eye tracking: attention classification using a heterogeneous input. Sensors 21(24):8205. https://doi.org/10.3390/s21248205
Vortmann L-M, Knychalla J, Annerer-Walcher S, Benedek M, Putze F (2021) Imaging time series of eye tracking data to classify attentional states. Front Neurosci. https://doi.org/10.3389/fnins.2021.664490
Willett W, Aseniero BA, Carpendale S, Dragicevic P, Jansen Y, Oehlberg L, Isenberg P (2022) Perception! immersion! empowerment!: superpowers as inspiration for visualization. IEEE Trans Visual Comput Graphics 28(1):22–32. https://doi.org/10.1109/TVCG.2021.3114844
Zahabi M, Abdul Razak AM (2020) Adaptive virtual reality-based training: a systematic literature review and framework. Virtual Reality 24(4):725–752. https://doi.org/10.1007/s10055-020-00434-w
Zuki FSM, Merienne F, Sulaiman S, Ricca A, Rambli DRA, Saad MNM (2022) Gamification, sensory feedback, adaptive function on virtual reality rehabilitation: a brief review. In: 2022 international conference on digital transformation and intelligence (ICDI). pp 330–335. https://doi.org/10.1109/ICDI57181.2022.10007124
Acknowledgements
The authors wish to thank all the study participants and Carolin Reichherzer for the discussions about her interesting work and for allowing us to use her 3D reconstruction model.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.